2013 High-End GPU War

CHANG3D

  • Tegra 4

    http://www.anandtech.com/show/6666/the-tegra-4-gpu-nvidia-claims-better-performance-than-ipad-4

    As the title (of the above article) says, NVidia claims Tegra 4's GPU has better performance than the A6X's PowerVR SGX 554MP4. A leaked GLBenchmark result, from something that isn't even as good as the final product, seemingly proves it. Anandtech also talks briefly about theoretical numbers, which point to the result likely being true. Still, the biggest flaw is its non-unified architecture, same as Tegra 3.

    Before someone says it's Kepler: it's NOT Kepler. But if Tegra 5 does bring Kepler, I don't see how NVidia could lose (to whatever Apple has got next).

  • Exynos 5 Octa

    http://www.anandtech.com/show/6654/...a-powered-by-powervr-sgx-544mp3-not-arms-mali

    If you check the link in the Tegra 4 section, you can also see the GLBenchmark result, which shows that the Exynos 5 Octa's GPU will suck compared to the A6X. Anandtech's theoretical numbers put it in between the A5X and the A6X.

  • Snapdragon 800

    http://www.anandtech.com/show/6568/qualcomm-krait-400-krait-300-snapdragon-800

    Qualcomm's next Snapdragon 600 and 800 have the Adreno 320 and 330 respectively. The current high-end S4, like the one in the Nexus 4, has the Adreno 320 too, which is great for movies and not as good for gaming. The Adreno 330 supposedly has 50% better graphics performance and twice the compute ability of the Adreno 320. The Adreno 320 is already close to the A6X, if not better, in movies, but gaming graphics is not even close. 50% better in graphics means 1.5 times the Adreno 320's current numbers (in case someone wants to do the math on it; see the quick sketch after this list). So theoretically, by the numbers, the Adreno 330 will be close to the A6X but still lose in most tests.

  • Next Apple GPU

    http://en.wikipedia.org/wiki/Apple_Ax

    Obviously, Apple's GPU is the standard bearer. All of these are theoretically compared to Apple's current-generation GPU, the PowerVR SGX 554MP4. NVidia can claim victory over it, but I'm pretty sure Apple can easily come up with a new one that will be on top. (Until Tegra 5 goes Kepler or Samsung matches the GPU, that is.)

    I'm assuming, if we look at Apple's SoC history, something new will come by mid-year. And it's probably a quad-core Swift as opposed to the dual-core in the A6/A6X. The GPU will likely be the same architecture, but with higher MHz and likely 2 or 4 more cores. With that, theoretically, the next Apple PowerVR GPU will still be the champ.

  • Dark Horse Candidate: Intel Bay Trail

    http://arstechnica.com/gadgets/2012...el-atom-socs-targets-pcs-servers-and-tablets/

    Intel will be using its own Gen 7 HD graphics for its Atom SoC. I don't know about the power consumption of the GPU, but we know that the 22nm Bay Trail's CPU power consumption will be very competitive with, if not way better than, the 28nm Cortex-A15. If we get the performance of Intel's HD 2500 on a phone, it'll probably be competitive. If HD 4000, we've got a killer. But power consumption levels tell me that probably won't happen...
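
Re: the Adreno 330 math in the Snapdragon section — here's a quick sketch of the scaling, in case anyone wants to plug in real scores later. The baseline of 100 is a made-up placeholder, not an actual GLBenchmark number; only the 1.5x graphics and 2x compute factors come from Qualcomm's claims.

```java
// Hypothetical projection of Adreno 330 scores from Adreno 320 numbers.
// The baseline of 100 is a placeholder, NOT a real benchmark result;
// only the 1.5x graphics and 2x compute factors reflect Qualcomm's claims.
public class AdrenoProjection {
    public static void main(String[] args) {
        double adreno320Graphics = 100.0; // placeholder baseline score
        double adreno320Compute  = 100.0; // placeholder baseline score

        double adreno330Graphics = adreno320Graphics * 1.5; // "50% better graphics"
        double adreno330Compute  = adreno320Compute  * 2.0; // "twice the compute"

        System.out.printf("Adreno 330 graphics: %.0f (vs. 320's %.0f)%n",
                adreno330Graphics, adreno320Graphics);
        System.out.printf("Adreno 330 compute:  %.0f (vs. 320's %.0f)%n",
                adreno330Compute, adreno320Compute);
    }
}
```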
 
Does Apple own PowerVR? They implement the graphics core in their SoC, but I don't think they own it, whereas NVidia and Qualcomm do own their GFX tech.
 
You had my attention until I read that Apple fanboi bs

Why? It seems that IF it's true that it'll be a quad version of the current dual, then it would be faster than the competitors, which are only around or a little above the current one.
 
Purely theoretical and based on nothing but fanboi-ism.

You seem pretty hard-up anti-Apple. Considering Apple's past track record on their SoCs, I would say his speculation has merit.
 
Purely theoretical and based on nothing but fanboi-ism.

How is it speculation? We know that the competitors will match or slightly exceed the current A6X GPU.

If the next-gen has double the resources, even if it obviously doesn't double the performance, isn't it more reasonable to assume it'll outperform its predecessor and thus its competitors?

It seems to be more blatant fanboi-ism to assume otherwise.
 
You had my attention until I read that Apple fanboi bs
You obviously weren't around when I referred to iOS as Windows 95... I'm just so fanboi that I even got an infraction here for talking trash about iPhones... LOL.

Time and time again, I do mention the word "theoretical." I wonder why I did it...

And if you want to disprove my theories, go right ahead. But you're not really offering much in terms of evidence. (You're just like someone else too, but at least he has offered real evidence that actually supported his theories once.)

Does Apple own PowerVR? They implement the graphics core in their SoC, but I don't think they own it, whereas NVidia and Qualcomm do own their GFX tech.
No, but what does that have to do with it? Apple's implementation of it is well beyond what Samsung could do with it this year.
 
You obviously weren't around when I referred to iOS as Windows 95... I'm just so fanboi that I even got an infraction here for talking trash about iPhones... LOL.

Time and time again, I do mention the word "theoretical." I wonder why I did it...

And if you want to disprove my theories, go right ahead. But you're not really offering much in terms of evidence. (You're just like someone else too, but at least he has offered real evidence that actually supported his theories once.)


No, but what does that have to do with it? Apple's implementation of it is well beyond what Samsung could do with it this year.

I highly doubt it has anything to do with what it can and can't do, vs what it needs / wants to do.

There is no need for such a gfx beast unless you plan on running a high res panel.
 
I highly doubt it has anything to do with what it can and can't do, vs what it needs / wants to do.

There is no need for such a gfx beast unless you plan on running a high res panel.
Hence, this thread is titled "2013 High-End GPU War"... :p
 
I can see all the GPUs coming out next being overkill, with developers playing catch-up. Not sure how much advantage we will see from such increases being promised on our phones.
 
The Mali failed to score a mention here.

sneaky.
Because no high-end SoC being released this year is using it, that we know of. How sneaky is that? The Exynos 5 Octa was rumored to be using it, but either it didn't make the deadline, the performance just isn't good enough, or the rumors were never true. Are you being serious or are you just trolling?
 
I can see all the GPUs coming out next being overkill, with developers playing catch-up. Not sure how much advantage we will see from such increases being promised on our phones.

That's been the case practically every generation.

When the SGS1 family came out, its GPU was FAR, FAR ahead of the competition. It was three times as powerful as the iPhone 4's (although it "only" translated to literally twice the framerates on pretty much every single benchmark and game, and even that was likely due to a soft frame rate limit of 60fps which could be disabled), and the iPhone 4's GPU was significantly more powerful than the next best competitors.
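
A rough sketch of why the cap eats the advantage, with made-up raw numbers — only the ~3x power ratio and the 60fps limit come from what I said above:

```java
// Made-up raw fps numbers; only the ~3x power ratio and the 60fps
// soft cap reflect the claim above. The cap compresses a 3x raw
// advantage down to a 2x measured advantage.
public class FrameCapDemo {
    static double capped(double rawFps, double cap) {
        return Math.min(rawFps, cap);
    }

    public static void main(String[] args) {
        double iphone4Raw = 30.0;            // hypothetical uncapped fps
        double sgs1Raw    = iphone4Raw * 3;  // "three times as powerful"
        double cap        = 60.0;            // soft frame rate limit

        System.out.printf("Uncapped ratio: %.1fx%n", sgs1Raw / iphone4Raw); // 3.0x
        System.out.printf("Capped ratio:   %.1fx%n",
                capped(sgs1Raw, cap) / capped(iphone4Raw, cap));            // 2.0x
    }
}
```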


Guess how many games/apps made use of that? Zero.

By the time there were apps that could use it, the market had moved onto dual-core phones with even more powerful GPUs.

SGS1 was the last time Android really had a GPU that outshined Apple's best for a significant, undeniable time as far as I know. And you know how many people miss those days? About zero. Developers don't give a shit. Most users don't give a shit. Hell, I don't even give a shit anymore. The iPhone 5's GPU is still pretty amazing, and practically no one talks about it.

Phones just don't have those killer apps like PC gaming did back in the day. No one has the time or inclination to develop bleeding edge.
 
The problem is that phone apps are all about volume, and you push volume by making it work on anything; even on Apple's App Store you've got to support at least 2 or 3 generations back for a game.

So lo and behold, phones have the same problem that PCs have had for decades: the people who fall behind or stick to low-end integrated graphics are holding back devs. Well, that and how much can you really appreciate on a small phone's screen, especially an ultra-tiny one from Apple.
 
Wait. If x86 applications have to be rewritten for ARM on Windows RT, does that mean android applications have to be rewritten for x86 mobile processors? Kind of like the one in the Droid Razr M?
 
They need to at least be recompiled. And if things are coded in certain ways to get extra performance on ARM, they won't be efficient on x86. A lot of people argued that other architectures were more efficient than x86, but x86 programs just worked best because developers knew the best way to code for x86; now the tables have turned and x86 is in the spot of not having the programming support in mobile apps.
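
To make "recompiled" concrete: Java/Dalvik code runs unchanged either way, but any native (NDK) library has to be built once per ABI. A minimal sketch — the library and method names are made up for illustration:

```java
// Java/Dalvik bytecode is architecture-neutral, but native libraries are not.
// With the NDK you list the ABIs to build (e.g. "APP_ABI := armeabi-v7a x86"
// in Application.mk) and the loader picks the matching .so from lib/<abi>/
// inside the APK. "gamephysics" is a made-up library name for illustration.
public class NativeBridge {
    static {
        // Loads libgamephysics.so for whatever ABI the device actually has.
        System.loadLibrary("gamephysics");
    }

    // Implemented in C/C++; the Java signature is identical on ARM and x86.
    public static native int stepSimulation(int frames);
}
```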

Also, I think that Google should really start pushing docking your phone. That should be a huge initiative from them. For Google it would be a win all the way around: they'd start eating into MS's desktop monopoly and Apple's presence. It would also encourage Android devs to start building more demanding games, because on a larger desktop monitor, with a keyboard and mouse, people could appreciate games more.
 
We've gotten to the point where GPU performance doesn't matter so long as the GPU in the device can push a consistent 60fps within the UI at the device's native resolution. As has been mentioned already, games are made for the lowest common denominator, with only a (relatively) few games being made for mid-range devices. With that said, 2013's GPUs:

Qualcomm's Adreno 320 was the first out of the gate (technically 2012). It has the appearance of being a step up in performance over the previous winners, but it helps that it's bolstered by the fastest mobile CPU available today. As I mentioned before, a direct comparison to older CPUs is tougher to do because you can't separate GPU from CPU in these devices. It would be like comparing a GeForce GTX 670 w/i7 3770 vs. a GTX 680 w/Core2Duo T6600. Of course the 670 would look faster in benchmarks. With that said, when I analyzed all of the benchmarks, it seemed that the Adreno was comparable to the Mali 400 in fill-rate performance (CPU limitations on the older chipsets aside), and handily beat it in geometric performance. It's hard to tell if it was truly a beast or if even the Adreno was still CPU limited in some tests. The Adreno did lose to the Mali in some fill-rate based tests.

Next up is Nvidia's Tegra 4, utilizing their 4th-generation GeForce ULP. The first three generations were based on NV3x, but this new version is partially Kepler-based. It's advertised as being six times faster than the ULP in Tegra 3. Of course, Tegra 3's GPU was pretty slow. I suspect that this GPU will be comparable to or slightly faster than the Adreno 320, but give it a few months and we should see some competent benchmarks.

ARM's Mali T-604 and T-658 are next up. The T-604 is featured in the Exynos dual, while the T-658 has been confirmed for the Exynos quad. You can ignore those Exynos Octa reports for now. Samsung has a habit of announcing their chips 12-18 months before they end up in a shipping product (Exynos dual was announced BEFORE the Galaxy S3 was ever announced, so some speculated that the S3 would have it). ARM stated that Big.Little would feature prominently in 2014, so we're likely a year out from that. Exynos quad (and T-658) should power the Galaxy S4. With that said, T-604 seemed pretty competent, keeping pace with the Adreno 320 in most benchmarks despite being paired with a slower CPU. That's pretty impressive. T-658 is just a T-604 doubled (604 is quad-core, 658 is octa). So performance on that should be the best that we've discussed so far.

But, my guess for the winner is pretty much the same as it's always been, Imagination's PowerVR. Their series 6 Rogue is supposed to be utilized in up to three chipsets this year. It's been confirmed for Sony's upcoming Novathor, it will likely be in the next Apple SOC, and it was expected to be in OMAP 5, which is highly unlikely to release but still a dark horse. I don't see any mobile GPU in 2013 touching PowerVR series 6....but as I began this post, will it even matter?
 
That's been the case practically every generation.

When the SGS1 family came out, its GPU was FAR, FAR ahead of the competition. It was three times as powerful as the iPhone 4's (although it "only" translated to literally twice the framerates on pretty much every single benchmark and game, and even that was likely due to a soft frame rate limit of 60fps which could be disabled), and the iPhone 4's GPU was significantly more powerful than the next best competitors.


Guess how many games/apps made use of that? Zero.

By the time there were apps that could use it, the market had moved onto dual-core phones with even more powerful GPUs.

SGS1 was the last time Android really had a GPU that outshined Apple's best for a significant, undeniable time as far as I know. And you know how many people miss those days? About zero. Developers don't give a shit. Most users don't give a shit. Hell, I don't even give a shit anymore. The iPhone 5's GPU is still pretty amazing, and practically no one talks about it.

Phones just don't have those killer apps like PC gaming did back in the day. No one has the time or inclination to develop bleeding edge.

I was kind of hinting at that. To me, too little pressure is put on increasing CPU output. Most see it as "enough" and want to put more into the GPU side, but I think we have already hit that peak. Until these phones can do some actual no-joke work, they will be novelty devices with little real work ability. Let's not start shoving more cores down developers' necks (which 90% will ignore) and let's get those dual cores to where our desktop dual cores got.
 
I was kind of hinting at that. To me, too little pressure is put on increasing CPU output. Most see it as "enough" and want to put more into the GPU side, but I think we have already hit that peak. Until these phones can do some actual no-joke work, they will be novelty devices with little real work ability. Let's not start shoving more cores down developers' necks (which 90% will ignore) and let's get those dual cores to where our desktop dual cores got.

But then we're talking CPUs, and going off topic. With respect to the original intent of this thread, we're talking about GPUs and their capabilities into 2013.

With that said, I suspect that the primary development with GPUs will be in the compute side. Traditionally, we think of CUDA (Nvidia proprietary) or OpenCL. Android actually uses a different means, called Renderscript, and version 4.2 saw a meaningful update to this. Wikipedia isn't the best source, but it has some good links backing up the article.

http://en.wikipedia.org/wiki/Renderscript

I'm not certain whether a GPU needs to be OpenCL certified to support the new features of Renderscript. However, it is reasonable to assume that if a GPU is designed for higher compute performance, then it should perform better with Renderscript. If that is the case, GPUs designed with compute performance in mind (Mali T6xx) will be ideal moving forward. Google hinted at the GPU being the reason they chose the Exynos 5 Dual over the S4 Pro (Krait quad) for the Nexus 10, despite a quad-core Krait being faster than a dual-core A15 in most cases.
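
For reference, here's roughly what a Renderscript compute call looks like on 4.2, using one of the built-in intrinsics — a minimal sketch, with error handling omitted; the runtime decides whether it actually runs on the CPU or GPU:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

// Minimal Renderscript sketch (Android 4.2 / API 17 intrinsics).
public class RsBlur {
    public static Bitmap blur(Context ctx, Bitmap src) {
        Bitmap dst = Bitmap.createBitmap(src.getWidth(), src.getHeight(), src.getConfig());

        RenderScript rs = RenderScript.create(ctx);
        Allocation in  = Allocation.createFromBitmap(rs, src);
        Allocation out = Allocation.createFromBitmap(rs, dst);

        ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        blur.setRadius(8f);   // blur radius in pixels, must be in (0, 25]
        blur.setInput(in);
        blur.forEach(out);    // run the kernel over every pixel

        out.copyTo(dst);      // copy the result back into the output bitmap
        rs.destroy();
        return dst;
    }
}
```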
 
That's been the case practically every generation.

When the SGS1 family came out, its GPU was FAR, FAR ahead of the competition. It was three times as powerful as the iPhone 4's (although it "only" translated to literally twice the framerates on pretty much every single benchmark and game, and even that was likely due to a soft frame rate limit of 60fps which could be disabled), and the iPhone 4's GPU was significantly more powerful than the next best competitors.


Guess how many games/apps made use of that? Zero.

By the time there were apps that could use it, the market had moved onto dual-core phones with even more powerful GPUs.

SGS1 was the last time Android really had a GPU that outshined Apple's best for a significant, undeniable time as far as I know. And you know how many people miss those days? About zero. Developers don't give a shit. Most users don't give a shit. Hell, I don't even give a shit anymore. The iPhone 5's GPU is still pretty amazing, and practically no one talks about it.

Phones just don't have those killer apps like PC gaming did back in the day. No one has the time or inclination to develop bleeding edge.

Pretty much my sentiment on the subject too. Current GPUs drive the OS UI perfectly fine now and 98% of the games don't fully utilize them anyways and by the time they will, you'll be due for a phone upgrade so you'll be back to using an under-utilized GPU in your phone again, hah.

/first world problems
 
But then we're talking CPUs, and going off topic. With respect to the original intent of this thread, we're talking about GPUs and their capabilities into 2013.

With that said, I suspect that the primary development with GPUs will be in the compute side. Traditionally, we think of CUDA (Nvidia proprietary) or OpenCL. Android actually uses a different means, called Renderscript, and version 4.2 saw a meaningful update to this. Wikipedia isn't the best source, but it has some good links backing up the article.

http://en.wikipedia.org/wiki/Renderscript

I'm not certain whether a GPU needs to be OpenCL certified to support the new features of Renderscript. However, it is reasonable to assume that if a GPU is designed for higher compute performance, then it should perform better with Renderscript. If that is the case, GPUs designed with compute performance in mind (Mali T6xx) will be ideal moving forward. Google hinted at the GPU being the reason they chose the Exynos 5 Dual over the S4 Pro (Krait quad) for the Nexus 10, despite a quad-core Krait being faster than a dual-core A15 in most cases.
Did you even read your own source? It has been deprecated since 4.1. Oh, hey, it's Medion. Figures...
 
Pretty much my sentiment on the subject too. Current GPUs drive the OS UI perfectly fine now and 98% of the games don't fully utilize them anyways and by the time they will, you'll be due for a phone upgrade so you'll be back to using an under-utilized GPU in your phone again, hah.

/first world problems
I choose to look at this from a bigger-picture view. Phones should be able to dock and power larger displays. Phones should be able to become a desktop replacement. And with Ouya, Project Shield, and others, why couldn't Android power a fully loaded gaming console, and why can't our phones be our game/media devices?

This year's GPUs should be more powerful than those in the Wii U, Xbox 360, and PS3.

If you consider the CPU benchmark predictions for Intel's Bay Trail, I don't see why we can't play WoW on a phone later this year, provided that its Gen7 GPU is strong enough.

On the subject of Mali:
TI has exited the SoC business. And the Exynos 5 Octa will not come with Mali. From preliminary benchmarks of the new Mali, it is not leaps and bounds better than its predecessor. Hence, the rumor that it isn't in the Octa because it sucks doesn't hold much credibility; the PowerVR Samsung implemented will suck just as much.

As far as the rumored SoC GPU benchmark ratings go:
1. Apple A7 (or A6XX)
2. NVidia Tegra 4
3. Apple A6X
4. Qualcomm Snapdragon 800
And lots of space in between...
5. Samsung Exynos 5 Octa
6. Qualcomm Snapdragon 600

An unknown position is Intel's Gen7; we know Intel got x86 power consumption down, but how their GPU power consumption goes is unknown. But if Intel is willing to drop PowerVR, it's probably safe to assume that it'll be better than the one currently implemented in the Clover Trail+.
 
Because no high-end SoC being released this year is using it, that we know of. How sneaky is that? The Exynos 5 Octa was rumored to be using it, but either it didn't make the deadline, the performance just isn't good enough, or the rumors were never true. Are you being serious or are you just trolling?

I wasn't being serious.

But I wasn't trolling either.

I think the latest Mali will be able to at least hang with the listed crew. I might be wrong, and who knows what product it'll ship in. But it's another player.

Did you even read your own source? It has been deprecated since 4.1. Oh, hey, it's Medion. Figures...

No. The 3D API was deprecated in 4.1. All 3D stuff should be moved over to OpenGL ES.
 
Stop trolling.

From the very link that you criticized the dude for "not reading":

"It can also be used for 3D graphics, but this API has been deprecated as of Android 4.1."

"As of Android 4.1, Renderscipt's experimental 3D rendering API has been deprecated"


So maybe you can help us out... what, exactly, has been deprecated in Android 4.1?
 
From the very link that you criticized the dude for "not reading":

"It can also be used for 3D graphics, but this API has been deprecated as of Android 4.1."

"As of Android 4.1, Renderscipt's experimental 3D rendering API has been deprecated"


So maybe you can help us out... what, exactly, has been deprecated in Android 4.1?

Your size of text broke my phone :D
 
Does Apple own PowerVR? They implement the graphics core in their SoC, but I don't think they own it, whereas NVidia and Qualcomm do own their GFX tech.

Imagination, who makes PowerVR, is a publicly traded company. No one person or company owns it.
Apple holds shares in the company, as does Intel.
 
Wait. If x86 applications have to be rewritten for ARM on Windows RT, does that mean android applications have to be rewritten for x86 mobile processors? Kind of like the one in the Droid Razr M?

That would be the Droid Razr i, not the M.

Basic apps don't need to be recompiled, since they would have been written in Java and run in the Dalvik VM. More system-intensive apps like games would have to be, though, since I don't believe those are written in Java but in C++.
 
I think the latest Mali will be able to at least hang with the listed crew. I might be wrong, and who knows what product it'll ship in. But it's another player.

You're not wrong. As I provided in another example: if you pair a GTX 670 with an i7 3770, and then benchmark it against a GTX 680 paired with a Core2Duo T6600, which GPU is faster? The 670 would appear faster in benchmarks, from which an idiot would assume that the 670 is faster. We do have people like that on these boards, unfortunately.

Snapdragon S4 (Krait, quad-core) is rated at 3.3 DMIPS per MHz per core. The Exynos 5 Dual (Cortex-A15) is rated at 3.5. Assuming that both are 1.5GHz, you're going to get 4,950 DMIPS per core and 19,800 total out of the S4 Pro, but 5,250 DMIPS per core and 10,500 total out of the Exynos 5 Dual. So, that tells you right there that, aside from very specific situations, the quad-core Krait has significantly more muscle behind it. And yet, in benchmarks, the Exynos 5 Dual and the S4 Pro came out very close to each other in most cases. Does that mean that the GPUs are even? Or perhaps one is more CPU bottlenecked than the other? To assume one way or the other with good reason is acceptable, but to state as fact that one is garbage, or to that effect, would be just plain ignorant.
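
Spelling the arithmetic out, in case anyone wants to plug in different clocks or core counts — the DMIPS/MHz ratings and the 1.5GHz assumption are the figures above, everything else just follows from them:

```java
// DMIPS estimate = rating (DMIPS per MHz per core) * clock (MHz) * core count.
// The 3.3/3.5 ratings and the 1.5GHz clock are the figures quoted above.
public class DmipsEstimate {
    static double dmips(double perMhzPerCore, double mhz, int cores) {
        return perMhzPerCore * mhz * cores;
    }

    public static void main(String[] args) {
        double mhz = 1500.0;
        System.out.printf("S4 Pro (quad Krait, 3.3):  %.0f DMIPS%n", dmips(3.3, mhz, 4)); // 19,800
        System.out.printf("Exynos 5 Dual (A15, 3.5):  %.0f DMIPS%n", dmips(3.5, mhz, 2)); // 10,500
    }
}
```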

The 3D API was deprecated in 4.1. All 3D stuff should be moved over to OpenGL ES.

Also correct. Renderscript debuted with Honeycomb as essentially GPU compute in reverse. It allows the CPU to work in tandem with the GPU more directly to handle graphics. At the time that Honeycomb was in development, the Adreno 200 and 205 were the most common GPUs in Android devices and they were WEAK. Fast forward to today and CPUs aren't advancing nearly as fast as mobile GPUs. So, we saw the 3D API in Renderscript deprecated as the GPU can handle that itself. Android 4.2 brought about the ability to use Renderscript as a means to perform compute oriented tasks on the GPU itself. I'd still prefer OpenCL support, but I'll take Renderscript over CUDA any day.
 
Seriously, a CPU paired with a separately chosen GPU does not exist in the mobile SoC world. The whole SoC must be considered together for its GPU performance.

Second, the Exynos 5 Dual in the Nexus 10 has already been tested with external 1080p displays. In those tests, the new Mali is not even close to the A6X. And the last I checked, its CPU performance is way better than the S4's.

So I don't know how this Core 2 Duo with GTX680 applies. If anything, it's a Core i5 with Intel HD4000 vs a Core 2 Quad with GTX650.

Seriously, the analogies have to make sense.
 
Have to say that I don't see the status quo changing too much here, although Qualcomm at last seems to have acknowledged that graphics are important (as of the Adreno 320 and later).

Apple may come out in front simply because it has a track record of prioritizing graphics and living up to expectations. I still find it fascinating that the Exynos 5 Dual is the first mainstream Cortex-A15 chip (that I know of), but gets saddled with aging graphics -- the Exynos 5 Octa also sounds like it'll use video slower than that of the A6X, so it might not hold up against an A7. NVIDIA? It's historically long on talk and short on walk, so I'm not counting on Tegra 4 being as fast in practice. Qualcomm's material is just too far away to gauge as of yet.

I suppose we'll have a better sense of things at Mobile World Congress, when HTC presumably shows its 2013 lineup and other companies feel more comfortable with revealing next-gen phones and tablets (ASUS and LG are foremost of mind).
 
I still find it fascinating that the Exynos 5 Dual is the first mainstream Cortex-A15 chip (that I know of), but gets saddled with aging graphics

The ARM Mali T604 is a significant upgrade from the Mali 400MP used in previous devices. The Exynos 5 dual currently uses the T604-MP4 variant, while the upcoming Exynos quad uses the T658, which is literally the same as what a T604-MP8 would be. Same chip, double the cores.

http://www.anandtech.com/show/6425/google-nexus-4-and-nexus-10-review/2

The problem that I have with Mali today is the same as the problem that I had with Mali 400 in the first place - insanely high fill-rate (compared to all non-PowerVR competitors) but lower geometric throughput than what I'd like to see for gaming. To ARM's credit, they did better with the ratio this time. The difference with the T604 isn't as bad as it was with the 400MP. In those benchmarks, the 400MP shows fill-rate within striking distance of Adreno 320, but geometric performance roughly one-third of ye olde SGX540.

the Exynos 5 Octa also sounds like it'll use video slower than that of the A6X

I wouldn't assume that just yet. Samsung has a habit of announcing the CPU used in the SOC a year or more before the SOC is released, but doesn't announce the GPU until the last minute. When Exynos 4412 (the quad-core used in the SGS3 and SGN2) was announced, there were rumors that it would feature the T604 due to Samsung's claims on the improved GPU performance. Instead, it was just a Mali 400MP seriously overclocked (see the SGS2 and SGS3 benchmarks from the previous link). When Exynos 5 was first announced, rumors swirled that Samsung was going back to PowerVR due to high order volume from Imagination. Those orders went into Apple's A-series, as they always do. At this point, anything on what we'll see on Exynos 5 octa is pure speculation. Everyone is linking to the same unconfirmed report by Anandtech. Sure, it could come out as true, but it's too early to tell. I fell for these reports the last two times.
 
I choose to look at this from a bigger-picture view. Phones should be able to dock and power larger displays. Phones should be able to become a desktop replacement. And with Ouya, Project Shield, and others, why couldn't Android power a fully loaded gaming console, and why can't our phones be our game/media devices?

This year's GPUs should be more powerful than those in the Wii U, Xbox 360, and PS3.

If you consider the CPU benchmark predictions for Intel's Bay Trail, I don't see why we can't play WoW on a phone later this year, provided that its Gen7 GPU is strong enough.

It is always a possibility, but there is a reason why the PC still stays on after the console invasion. Besides, won't playing games on a phone make you a sicko, if you get what I mean? How cool do you find people who are always looking down at their phone when you are chatting with them? And this is coming from people who had never played a PC or console game before the iPhone 4.
 
I can see all the GPUs coming out next being overkill, with developers playing catch-up. Not sure how much advantage we will see from such increases being promised on our phones.

Like modern processors for desktops?
 
Call me crazy if you want, but I couldn't care less about playing a game or watching movies on a tiny phone. Now, one day, if they come out with a powerful enough GPU/CPU and I can connect a mouse and keyboard and hook it to a TV or monitor, then I would be interested.
 
Phones can already display 1080p and connect to your TV. Some can do it wirelessly, and almost any modern Android phone can do it over HDMI. All you need is the right wires. They can also accept Bluetooth keyboards/mice, and some have nice docks that accept USB. Really, we are already there; the consumer is just not aware of it because most phone makers have a conflict of interest. They would like to sell you more devices, not fewer; they make more money that way.
 