MIT Code Makes Web Pages Load 34% Faster

HardOCP News

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory have developed a system, called Polaris, that lets web pages load 34 percent faster by overlapping the downloading of a page's objects.

“It can take up to 100 milliseconds each time a browser has to cross a mobile network to fetch a piece of data,” says PhD student Ravi Netravali, who is first author on a paper about Polaris that he will present at this week’s USENIX Symposium on Networked Systems Design and Implementation (NSDI '16). “As pages increase in complexity, they often require multiple trips that create delays that really add up. Our approach minimizes the number of round trips so that we can substantially speed up a page’s load-time.”
 
From what I could understand, it sounds cool.
I think it would be adopted fast.
 
Perhaps I am not entirely seeing how they are accomplishing this increase, but I think we could come up with a way for the server to make one compressed (zipped) file of all of the page's dependencies and deliver that to the browser in one request. There may still be some content that needs to be loaded dynamically, yes, but I think it could be minimized.
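Something like this minimal sketch, assuming a hypothetical Node/Express server with a hard-coded file list (the route and file names are made up for illustration, not anything from the article): the server packs every dependency into one JSON object, gzips it, and hands it over in a single response.

```ts
// bundle-server.ts — hypothetical sketch: serve all page dependencies in one gzipped response
import express from "express";
import { readFileSync } from "fs";
import { gzipSync } from "zlib";

const app = express();

// Made-up dependency list for the page
const DEPENDENCIES = ["styles/main.css", "js/app.js", "js/vendor.js", "img/logo.svg"];

app.get("/bundle", (_req, res) => {
  // Pack every dependency into a single JSON object keyed by path
  const bundle: Record<string, string> = {};
  for (const path of DEPENDENCIES) {
    bundle[path] = readFileSync(path, "base64");
  }

  // Compress the whole bundle once and deliver it in one round trip
  const compressed = gzipSync(Buffer.from(JSON.stringify(bundle)));
  res.setHeader("Content-Type", "application/json");
  res.setHeader("Content-Encoding", "gzip");
  res.send(compressed);
});

app.listen(8080);
```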
 

So basically, the Polaris package keeps track of dependencies.

It sounds like the JavaScript will then read the dependency list and have the browser download all the dependencies in one go instead of going back and forth to fetch everything individually.

Pretty simple technique if you ask me.
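A rough sketch of how I picture the client side, assuming a hypothetical /dependencies.json manifest (the article doesn't spell out Polaris's actual format): one request for the list up front, then every download kicked off in parallel instead of being discovered one at a time.

```ts
// polaris-style-fetch.ts — illustrative only; the manifest endpoint and format are assumptions
async function loadPage(): Promise<void> {
  // One round trip to get the full dependency list up front (hypothetical endpoint)
  const manifest: string[] = await (await fetch("/dependencies.json")).json();

  // Kick off every download at once instead of discovering objects one by one
  const responses = await Promise.all(manifest.map((url) => fetch(url)));
  const bodies = await Promise.all(responses.map((r) => r.text()));

  // The page can now be assembled from `bodies` without further round trips
  console.log(`Fetched ${bodies.length} objects in parallel`);
}

loadPage();
```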

Celeryman - that approach would work pretty well for static pages, but page loading would then depend heavily on how fast the client computer could decompress the file. Also, unless something has changed, I am pretty sure that a lot of content is already compressed on the fly server side and then decompressed client side to save bandwidth.

Actually, looking it up, you can serve uncompressed and compressed versions at the same time, so it works with browsers that do or do not support compression.

And for static pages, you can do pre-compression server side.

How To Optimize Your Site With GZIP Compression | BetterExplained
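For what it's worth, on-the-fly compression with content negotiation is basically a one-liner in most stacks. A sketch assuming Node/Express and the compression middleware (the static directory name is made up): the middleware checks the browser's Accept-Encoding header, so clients that don't support gzip automatically get the uncompressed bytes.

```ts
// gzip-server.ts — sketch of on-the-fly gzip with content negotiation
import express from "express";
import compression from "compression";

const app = express();

// Compresses responses only for clients whose Accept-Encoding includes gzip;
// everyone else receives the uncompressed bytes.
app.use(compression());

// Hypothetical static directory; truly static files could also be gzipped
// ahead of time and served pre-compressed instead.
app.use(express.static("public"));

app.listen(8080);
```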
 

Yes, I use GZIP compression on all my sites, and it does compress server side to save bandwidth. So their technique is having more JavaScript on the client side and trying to fetch all dependencies at once. I wonder: if the functionality were built/compiled into the browser and the list were delivered directly to the browser instead of using JavaScript as a middleman, how much would that speed this same method up?
 
I could have sworn there used to be a setting in your web browser that would enable simultaneous downloads. The only problem was that not all web servers supported it, and in that case it would only download part of the webpage.
 
The million dollar question is... will they patent it?

I do not agree with software patents.
 
So they can load pages with even MORE ads and make it take the same amount of time!

 

That simultaneous-download setting was my first thought as well. Old Netscape (3.x and 4.x) had a setting to allow downloading multiple objects at once. This seems like something that should have been going on 15 years ago.
 

This is something different. Right now you do have concurrency in requests. However, dependencies are still discovered at load time, and they often cascade.
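To illustrate the cascade (file names are made up): even with concurrent connections, the browser can't request what it hasn't discovered yet, so nested references turn into extra serial round trips. Roughly:

```ts
// cascade.ts — sketch of why load-time discovery forces serial round trips; URLs are hypothetical
async function loadWithDiscovery(): Promise<void> {
  // Round trip 1: fetch the HTML
  const html = await (await fetch("/index.html")).text();

  // In a real browser the stylesheet URL is only known after parsing the HTML,
  // so it becomes round trip 2 (modeled here as a sequential await)
  const css = await (await fetch("/styles/main.css")).text();

  // The stylesheet in turn references a web font: round trip 3,
  // and scripts that import more scripts add round trips 4, 5, ...
  await fetch("/fonts/body.woff2");

  console.log(`HTML ${html.length} bytes, CSS ${css.length} bytes loaded serially`);
}

loadWithDiscovery();
```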
 

Yeah. And if caching is involved, the browser may send conditional requests (If-Modified-Since / If-None-Match) to find out whether a file has changed before downloading the whole thing. You could deliver everything as a zip file or equivalent, but then you would be resending content that is already cached, which is a waste. It also wouldn't help with dynamic content that isn't the same on every load.
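For instance, a revalidation looks roughly like this (the URL handling and ETag plumbing are made up for illustration): the round trip still happens, but a 304 response carries no body.

```ts
// revalidate.ts — sketch of cache revalidation with a conditional GET
async function revalidate(url: string, cachedEtag: string, cachedBody: string): Promise<string> {
  const res = await fetch(url, { headers: { "If-None-Match": cachedEtag } });

  // 304 Not Modified: the server sent no body, so keep using the cached copy
  if (res.status === 304) {
    return cachedBody;
  }

  // Otherwise the resource changed and the fresh body replaces the cached one
  return res.text();
}
```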

The article was pretty light on details, but here is what I think could work well: when I request a website, I provide a list of the contents I currently have cached for that site (CSS, JS, etc.) along with their version information. The server could then send me a package containing anything that is newer, which I could cache for next time, and specify which of the items I listed are no longer required. That would require modifications to both the browser and the server software, but it would be pretty slick.

For some file types, like JS and CSS, the server could probably even send diffs (like git) if it maintained them, which could make for some very small downloads. And the first-time visitor or a cache refresh is still easy to handle: you just say you have no previous contents.
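Purely as a straw man (every field name here is invented, not anything from the article), the exchange might look something like this on the wire:

```ts
// cache-manifest-protocol.ts — straw-man types for the proposed exchange; all names are invented
interface CachedEntry {
  path: string;      // e.g. "js/app.js"
  version: string;   // content hash or version string the client already has
}

// Client -> server: "here is what I already have cached for this site"
interface PageRequest {
  url: string;
  cached: CachedEntry[];   // empty for a first-time visitor or after a cache refresh
}

// Server -> client: only what changed, plus housekeeping
interface PageResponse {
  updated: { path: string; version: string; content: string }[];  // full bodies for new or changed files
  diffs: { path: string; version: string; patch: string }[];      // git-style patches for text assets like JS/CSS
  remove: string[];                                               // cached entries the page no longer needs
}
```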
 