Why Android Lags

Here's a good read:

https://plus.google.com/100838276097451809262/posts/VDkV9XaJRGS

Also read Chi-Ho Kwok's posts.

The breakdown is that Andrew Munn wants the UI thread to run at the highest priority. He sees UI responsiveness as more important than any other functionality.

Chi-Ho Kwok, however, believes the UI thread should be scheduled like any other background task, and that this is true multitasking.

And I find myself agreeing with both of them!
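
For context, Android already exposes both approaches through the android.os.Process API. Here's a rough sketch of the two philosophies; the class and method names are mine, purely for illustration:

```java
import android.os.Process;

// A minimal sketch of the two scheduling philosophies using the real
// android.os.Process API. Class and method names are invented.
public class PriorityDemo {

    // Munn's view: the thread doing display work should outrank everything.
    // THREAD_PRIORITY_URGENT_DISPLAY (-8) is the level Android reserves
    // for critical rendering work.
    static void favorUi() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_DISPLAY);
    }

    // Kwok's view: background work competes on an equal footing instead
    // of being demoted whenever the UI is busy.
    static void runBackgroundWork(final Runnable task) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                // Default (0) rather than THREAD_PRIORITY_BACKGROUND (10),
                // so the task isn't starved while the UI is active.
                Process.setThreadPriority(Process.THREAD_PRIORITY_DEFAULT);
                task.run();
            }
        }).start();
    }
}
```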
 
Saw that earlier. Great read and makes a lot of sense from all sides. I honestly prefer how Android does it over iOS. That's just personal preference though. With iOS it always annoyed me that if I had a lot of updates I couldn't do much with the phone because the updates would take a year if I continued doing anything.
 
I have to say that despite his explanations sounding perfectly plausible and intuitive, I don't find that his "I'm an undergraduate who interned at Google" credentials lend much confidence. Maybe he's right, but the source is too iffy for me.
 
There's also the fact that Android runs on a virtual machine... it's a sacrifice Android users have to make for freedom of choice when it comes to hardware. As a result, it'll never run quite as efficiently as iOS.

We're soon getting to the point where it won't matter... Moore's Law is holding true for ARM SoCs, and we've seen performance increase drastically over the last couple of years. It's getting to the point where there's just so much processing power available that it'll be virtually impossible to distinguish performance between the two.
 
I think it pretty much already doesn't matter. My Galaxy SII has not met its match yet, and does everything I throw at it with no slowdowns or anything. Everything is pretty much instant with no discernible lag.

If you have an older device, then I see how you would notice it, but as things have progressed, with the processors these devices use now, it's almost a non-issue.
 
There's also the fact that Android runs on a virtual machine... it's a sacrifice Android users have to make for freedom of choice when it comes to hardware. As a result, it'll never run quite as efficiently as iOS.

We're soon getting to the point where it won't matter... Moore's Law is holding true for ARM SoCs, and we've seen performance increase drastically over the last couple of years. It's getting to the point where there's just so much processing power available that it'll be virtually impossible to distinguish performance between the two.

Runs on a VM?
 
There's also the fact that Android runs on a virtual machine... it's a sacrifice Android users have to make for freedom of choice when it comes to hardware. As a result, it'll never run quite as efficiently as iOS.

We're soon getting to the point where it won't matter... Moore's Law is holding true for ARM SoCs, and we've seen performance increase drastically over the last couple of years. It's getting to the point where there's just so much processing power available that it'll be virtually impossible to distinguish performance between the two.

I think we're pretty much there with the Galaxy S2, and with ICS on the way, and adding hardware acceleration to an app being one line of XML, it's only going to get better.
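
For reference, the "one line of XML" is the android:hardwareAccelerated attribute in the manifest, and there's also a per-view API. A rough sketch (the class name and the someView parameter are placeholders of mine):

```java
import android.view.View;

// The "one line of XML" goes on <application> (or a single <activity>)
// in AndroidManifest.xml:
//
//     <application android:hardwareAccelerated="true" ...>
//
// Acceleration can also be controlled per view at runtime with the real
// View.setLayerType() API (added in API 11).
public class HwAccel {

    // Render this view into a GPU-backed layer.
    static void enableGpuLayer(View someView) {
        someView.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    }

    // Opt a view out when it uses drawing operations the hardware
    // renderer doesn't support.
    static void forceSoftware(View someView) {
        someView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
    }
}
```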
 
I only got around to trying both iOS and Android for the first time this summer, even though I've been reading about them for three years.

I tried iOS first on a 4th-gen iPod Touch, and one of the first things that surprised me was how many stutters and crashes I was able to spot within my first week of use. When I finally got a Nexus S two months later running 2.3.5, it wasn't any better or worse in that regard. Around the same time I got to compare it with a coworker's iPhone 4, and aside from the nicer design and materials (the Nexus S is damn cheap looking), my impressions of the software didn't improve.

Take my opinion for what it's worth, but I think that even if Android does have some sort of structural difference that makes it appear slower and less fluid, the differences are not going to be noticeable to the average person - assuming comparable high-end phones like the SGS2 and Nexus Prime.

Where people are going to notice a difference is on all those low-end Android phones sold to people on a budget. And what's worse, plenty of OEMs still skin and add bloatware on top. It's basically a repeat of the Mac vs. PC debate in the mobile market: tons of cheap hardware to compare against, which tarnishes perceptions.
 

... for a good laugh, anyway. The technical merits are nil, the ideas are worthless, and he doesn't know shit about anything, including how iOS works. If you read comments from actual iOS developers, you'll see it is *nothing* like how he thinks it is. Also, if you have an iOS device, things don't stop just because your finger touched the screen. Apps will continue to install, and the progress will continue to update (tested on an iPad 2, anyway).

Btw, Dianne Hackborn posted more on this topic (she actually works on the core of Android and knows what she is talking about): https://plus.google.com/105051985738280261832/posts/XAZ4CeVP6DC

And for those that haven't read it here is the original post: https://plus.google.com/105051985738280261832/posts/2FXDCz8x93s
 
Also, if you have an iOS device, things don't stop just because your finger touched the screen.

Clicking on a link, touching the screen, and not lifting your finger will stop some pages from loading on an iPhone 4. I thought everyone knew this. Catch it just right and the progress bar will start, but never continue.
 
I have to say that despite his explanations sounding perfectly plausible and intuitive, I don't find that his "I'm an undergraduate who interned at Google" credentials lend much confidence. Maybe he's right, but the source is too iffy for me.

He sounds like a complete noobie to me, and when he claimed the Tegra 2's lack of NEON had anything to do with UI slowdown, that was it, I was done reading.
Dianne Hackborn even addressed this comment.




Android does NOT run in a VM. Its apps do.
 
He sounds like a complete noobie to me, and when he claimed the Tegra 2's lack of NEON had anything to do with UI slowdown, that was it, I was done reading.
Chi-Ho Kwok even addressed this comment.

The lack of NEON in Tegra 2 *did* hurt UI performance. NEON is used by the rendering code if available; not having it did have an impact. Not a huge one, no, but an impact nonetheless, and it all adds up. Btw, this is what Dianne had to say about Tegra 2 (and how it sucks):

There is still a limit to how much the GPU can do. A recent interesting example of this is tablets built with Tegra 2 -- that GPU can touch every pixel of a 1280x800 screen about 2.5 times at 60fps. Now consider the Android 3.0 tablet home screen where you are switching to the all apps list: you need to draw the background (1x all pixels), then the layer of shortcuts and widgets (let’s be nice and say this is .5x all pixels), then the black background of all apps (1x all pixels), and the icons and labels of all apps (.5x all pixels). We’ve already blown our per-pixel budget, and we haven’t even composited the separate windows to the final display yet.
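
To make the arithmetic in that quote concrete (the 2.5x fill-rate budget is Dianne's number; the rest follows from it):

```java
// Back-of-the-envelope check of the overdraw budget Dianne describes.
// The 2.5x fill-rate figure is hers; everything else is arithmetic.
public class FillBudget {
    public static void main(String[] args) {
        double budget     = 2.5; // screens of pixels per frame at 60fps
        double background = 1.0; // home screen wallpaper
        double shortcuts  = 0.5; // shortcuts and widgets layer
        double allAppsBg  = 1.0; // black background of all apps
        double allAppsFg  = 0.5; // icons and labels

        double needed = background + shortcuts + allAppsBg + allAppsFg; // 3.0
        System.out.printf("Need %.1fx the screen, budget is %.1fx: over by %.1fx%n",
                needed, budget, needed - budget);
        // ...and that's before compositing the separate windows to the
        // display, which touches every pixel at least once more.
    }
}
```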

Nvidia botched Tegra 2 big time; fortunately, Tegra 3 looks much, much better.

Android does NOT run in a VM. Its apps do.

No, a significant chunk of Android runs in a VM as well. The framework, activity manager, window manager, system UI, etc. all have a lot of Java code and run in Dalvik, and the native parts are called via JNI.

Clicking on a link, touching the screen, and not lifting your finger will stop some pages from loading on an iPhone 4. I thought everyone knew this. Catch it just right and the progress bar will start, but never continue.

In the browser it does, yes, but not globally and universally. He was talking about app installation being suspended because your finger is on the screen. Android's browser does a lot of the same stuff now as well (it prevents DOM updates and such while scrolling/zooming).
 
Android intern goes to intern for Microsoft. Trashes Android on the way out. Butthurt? Maybe Romain spurned his advances.
 
He sounds like a complete noobie to me, and when he claimed the Tegra 2's lack of NEON had anything to do with UI slowdown, that was it, I was done reading.
Dianne Hackborn even addressed this comment.

I may be echoing kllrnohj a bit here, but it's ironic that you call him a noobie when both of your statements are incorrect. Dianne also said the following:

As device screen resolution goes up, achieving a 60fps UI is closely related to GPU speed and especially the GPU’s memory bus bandwidth. In fact, if you want to get an idea of the performance of a piece of hardware, always pay close attention to the memory bus bandwidth. There are plenty of times where the CPU (especially with those wonderful NEON instructions) can go a lot faster than the memory bus.

I've been writing about ARM SoCs for years now, and I've always said that GPU bandwidth is the primary factor limiting GPU performance in modern smartphones. (For example, check out the GPU performance section of this old Hummingbird vs Snapdragon article I wrote.) Interestingly enough, while SoC CPUs have gone through several generations of architecture changes while Android has been around (ARM11, Cortex-A8, Cortex-A9, soon Cortex-A15), GPUs have changed very little. Qualcomm is using the same basic GPU architecture as AMD's Imageon division was developing back in 2008 when they were bought by Qualcomm. NVIDIA's GPU is basically a scaled-down GeForce that isn't anything particularly revolutionary, and Imagination Technologies' PowerVR SGX line of GPUs has been running in phones since the OG Droid over two years ago, with only a move from the 535 to the 540 last year, which is still being used in the Galaxy Nexus this year.

Why? Memory bandwidth. An SGX 540 like the one running in the Galaxy Nexus was already in production around the time of that original Droid (it was announced in November of 2007), but memory bandwidth limitations made using the GPU pointless. So here we are, two years later, generations of CPUs later, using the same GPU, because only now can we use it to its potential through feature size reductions (read: clock speed increases) and dual-channel LPDDR2 memory.
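
To put rough numbers on why bandwidth is the wall (all inputs here are illustrative assumptions, not measurements):

```java
// Rough floor on the memory traffic UI rendering generates at 60fps.
// All inputs are illustrative assumptions, not measurements.
public class BandwidthEstimate {
    public static void main(String[] args) {
        long width = 1280, height = 800; // WXGA tablet screen
        long bytesPerPixel = 4;          // 32-bit color
        long fps = 60;
        double overdraw = 3.0;           // screens touched per frame

        // Each touched pixel is at least one write; blending reads and
        // texture fetches only add to this, so it's a floor.
        double bytesPerSec = width * height * bytesPerPixel * fps * overdraw;
        System.out.printf("~%.2f GB/s just to write pixels%n", bytesPerSec / 1e9);
        // That's roughly 0.74 GB/s out of the few GB/s a single-channel
        // LPDDR2 bus provides, and the bus is shared with the CPU, the
        // video decoder, the camera, and everything else.
    }
}
```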

Where am I going with all this? Everyone is pointing out that Tegra 3 has NEON and this will make a big difference. For UI rendering, a little bit, yes. For HD video playback, definitely. But Tegra 3 is lacking a dual-channel memory controller just as Tegra 2 did. While NVIDIA's justification is that even 2 Cortex-A9 cores cannot saturate the memory bandwidth of a dual-channel memory system, the real sacrifice gets made by the GPU, and it's why the Tegra 3 GPU doesn't set any records for performance. Faster clocked LPDDR2 memory keeps it competitive, but for a company that prides itself on graphics performance, it seems that with a minor change they could shatter the competition. The reason they don't do this appears to be due to a strong push by NVIDIA to have the most power-efficient SoCs on the market (as evidenced by the companion core in Tegra 3). I personally would be interested in knowing the amount of power saved by clocking memory faster instead of using a dual-channel controller.

Android does NOT run in a vm. It's apps do.

I've got a challenge for you. If you have an Android device, use an application that lets you browse the root of your phone (I like ES File Explorer) and go to /system/app. Take a look at some of the APK files in there. You're looking at a lot of the core functionality of Android here. These are system applications, and Android will not run without many of them. If you don't believe me, try deleting them and see if your phone will boot! ;)
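
If you'd rather not poke around in a file manager, here's a tiny sketch of the same peek (plain Java; /system/app is the stock location, but on a locked-down phone you may only be able to do the equivalent over adb shell):

```java
import java.io.File;
import java.io.FilenameFilter;

// List the system APKs the post is talking about.
public class SystemApps {
    public static void main(String[] args) {
        File dir = new File("/system/app");
        File[] apks = dir.listFiles(new FilenameFilter() {
            @Override
            public boolean accept(File d, String name) {
                return name.endsWith(".apk");
            }
        });
        if (apks == null) {
            // Directory unreadable without root/adb on some devices.
            System.out.println("Cannot read " + dir + " on this device.");
            return;
        }
        for (File apk : apks) {
            System.out.println(apk.getName());
        }
    }
}
```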
 
Where am I going with all this? Everyone is pointing out that Tegra 3 has NEON and this will make a big difference. For UI rendering, a little bit, yes. For HD video playback, definitely. But Tegra 3 is lacking a dual-channel memory controller just as Tegra 2 did. While NVIDIA's justification is that even 2 Cortex-A9 cores cannot saturate the memory bandwidth of a dual-channel memory system, the real sacrifice gets made by the GPU, and it's why the Tegra 3 GPU doesn't set any records for performance. Faster clocked LPDDR2 memory keeps it competitive, but for a company that prides itself on graphics performance, it seems that with a minor change they could shatter the competition. The reason they don't do this appears to be due to a strong push by NVIDIA to have the most power-efficient SoCs on the market (as evidenced by the companion core in Tegra 3). I personally would be interested in knowing the amount of power saved by clocking memory faster instead of using a dual-channel controller.

I don't think it has much to do with power, but rather cost and die space. That said, Nvidia did increase the memory bandwidth in Tegra 3 a *lot* - something like 2.5x-3x more than Tegra 2 thanks to DDR3L support, which is more than the bump in GPU performance. It is actually possible Nvidia is correct that memory bandwidth isn't a bottleneck in Tegra 3, at least not at current tablet resolutions.
 
I may be echoing kllrnohj a bit here, but it's ironic that you call him a noobie when both of your statements are incorrect. Dianne also said the following:



I've been writing about ARM SoCs for years now, and I've always said that GPU bandwidth is the primary factor limiting GPU performance in modern smartphones. (For example, check out the GPU performance section of this old Hummingbird vs Snapdragon article I wrote.) Interestingly enough, while SoC CPUs have gone through several generations of architecture changes while Android has been around (ARM11, Cortex-A8, Cortex-A9, soon Cortex-A15), GPUs have changed very little. Qualcomm is using the same basic GPU architecture as AMD's Imageon division was developing back in 2008 when they were bought by Qualcomm. NVIDIA's GPU is basically a scaled-down GeForce that isn't anything particularly revolutionary, and Imagination Technologies' PowerVR SGX line of GPUs has been running in phones since the OG Droid over two years ago, with only a move from the 535 to the 540 last year, which is still being used in the Galaxy Nexus this year.

Why? Memory bandwidth. An SGX 540 like the one running in the Galaxy Nexus was already in production around the time of that original Droid (it was announced in November of 2007), but memory bandwidth limitations made using the GPU pointless. So here we are, two years later, generations of CPUs later, using the same GPU, because only now can we use it to its potential through feature size reductions (read: clock speed increases) and dual-channel LPDDR2 memory.

Where am I going with all this? Everyone is pointing out that Tegra 3 has NEON and this will make a big difference. For UI rendering, a little bit, yes. For HD video playback, definitely. But Tegra 3 is lacking a dual-channel memory controller just as Tegra 2 did. While NVIDIA's justification is that even 2 Cortex-A9 cores cannot saturate the memory bandwidth of a dual-channel memory system, the real sacrifice gets made by the GPU, and it's why the Tegra 3 GPU doesn't set any records for performance. Faster clocked LPDDR2 memory keeps it competitive, but for a company that prides itself on graphics performance, it seems that with a minor change they could shatter the competition. The reason they don't do this appears to be due to a strong push by NVIDIA to have the most power-efficient SoCs on the market (as evidenced by the companion core in Tegra 3). I personally would be interested in knowing the amount of power saved by clocking memory faster instead of using a dual-channel controller.



I've got a challenge for you. If you have an Android device, use an application that lets you browse the root of your phone (I like ES File Explorer) and go to /system/app. Take a look at some of the APK files in there. You're looking at a lot of the core functionality of Android here. These are system applications, and Android will not run without many of them. If you don't believe me, try deleting them and see if your phone will boot! ;)

It does not matter how many of its subsystems run in a VM; the point is that the OS is not virtualized.
 
It does not matter how many of its subsystems run in a VM; the point is that the OS is not virtualized.

I think you're confusing your terminology. Nothing in Android is virtualized in the VMware or VirtualBox sense - not the system, not the apps, nothing. There is, however, a "VM" called Dalvik that powers a *huge* part of Android, including the system. This VM is more of an abstraction layer; it is not virtualization of another OS or anything like that. You cannot run Android without that VM; it is a critical part of Android itself. To say that Android does not run in a VM is wrong: it runs both in a VM and not in a VM - different parts live on different sides of the gap.

With that said, everything people consider "Android", from the activity management to the framework to the system UI, mostly runs in Dalvik, or rather the processes are Dalvik ones. Think of Android more as a linear gradient: it starts all native at the lowest levels, Java and native code intermix as you go up, and finally you end up with all Java. All of it falls under the umbrella of "Android". On top of that you then have apps.
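
A sketch of what that Java/native intermixing looks like at one boundary; the class, method, and library names are invented, but the JNI mechanism (System.loadLibrary plus native methods) is the real one:

```java
// How a Dalvik-side class hands work to native code via JNI.
// Names here are made up for illustration; the mechanism is real.
public class NativeBridge {
    static {
        // Loads libdemo.so from the native library search path.
        System.loadLibrary("demo");
    }

    // Declared in Java, implemented in C/C++. Dalvik resolves the call
    // to a symbol like Java_NativeBridge_blendPixels in libdemo.so.
    public static native void blendPixels(int[] dst, int[] src);
}
```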
 
A part of the hiccups in Android's UI can certainly be traced to Java. Java UIs (and in fact most other UIs as well) run on a single thread; more cores on your CPU mean nothing to the UI. And while Java is a good high-level language, using it to implement low-level stuff that requires a lot of number or memory crunching is a bad idea. There are some forced design choices that make the language unsuitable for these tasks.

I'm not talking about garbage collection; C# has garbage collection and it has never been a big issue. I'm talking about the way the underlying JVM represents data in memory. Java kills the CPU's caching mechanism by requiring that all objects be referenced indirectly, residing fragmented across the entire memory. C# and .NET can represent entities by value (i.e., objects usable without references or pointers), which results in a better memory footprint and less random memory access.
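
A quick illustration of the point (the Point example is mine): every Java object lives behind a reference, so walking an array of objects chases pointers all over the heap, while packing the same data into primitive arrays keeps it contiguous and cache-friendly:

```java
// Two layouts for the same data. For objects, Java only offers the
// first; the second is the usual workaround for hot paths.
public class Layouts {
    static class Point { int x, y; } // each instance is a separate heap object

    // Array of references: each element may live anywhere on the heap,
    // so this loop can miss the cache on every iteration.
    static long sumObjects(Point[] pts) {
        long s = 0;
        for (Point p : pts) s += p.x + p.y;
        return s;
    }

    // "Struct of arrays" workaround: contiguous primitives, predictable
    // sequential access, no per-element indirection.
    static long sumPacked(int[] xs, int[] ys) {
        long s = 0;
        for (int i = 0; i < xs.length; i++) s += xs[i] + ys[i];
        return s;
    }
}
```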

But to be fair, there are a lot of apps that are just programmed poorly. Windows Phone has them too. It ain't all about the underlying system.
 