HTTP

Something occurred to me today. I'm working on an application that, unfortunately, consists of about 2MB of JavaScript code that needs to be downloaded to the web browser. Each time the JavaScript is modified even slightly and the page is refreshed, the browser has to re-download the modified file in its entirety. And because there are 50+ JavaScript files, my understanding is that the browser has to make a separate HTTP GET request for each one.
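For a sense of what each refresh costs today, here's roughly what a browser's cache revalidation does per file: a conditional GET that comes back 304 Not Modified when nothing changed, but still burns a full round trip to find that out. This is just a minimal Python sketch of the exchange, not what any browser literally runs; the URL and ETag below are placeholders:

```python
import urllib.request
import urllib.error

def revalidate(url, cached_etag):
    """Conditional GET: returns (body, etag), with body None if unchanged."""
    req = urllib.request.Request(url, headers={"If-None-Match": cached_etag})
    try:
        with urllib.request.urlopen(req) as resp:
            # 200 OK: the file changed, so the whole body comes down again.
            return resp.read(), resp.headers.get("ETag")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # 304 Not Modified: no body, but we still paid a round trip.
            return None, cached_etag
        raise

# Fifty-plus files means fifty-plus of these round trips per refresh:
# body, etag = revalidate("http://example.com/js/app.js", '"abc123"')
```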

Idea #1: My experience with the 'rsync' utility has been that it is a phenomenal piece of technology. Why not use it in conjunction with HTTP so that only the parts of a file that have changed are re-downloaded?
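To make that concrete, here's a stripped-down sketch of the rsync scheme adapted to this setting: the client hashes its cached copy in fixed-size blocks and sends those hashes up; the server compares them against the current file and sends back literal bytes only for the blocks that differ. Real rsync is smarter than this (its rolling checksum lets it match blocks at any byte offset, not just at block boundaries), and the block size and choice of MD5 here are arbitrary:

```python
import hashlib

BLOCK = 4096  # block size; real rsync tunes this to the file size

def signature(old: bytes):
    """Map each cached block's hash to its index (what the client sends)."""
    return {hashlib.md5(old[i:i + BLOCK]).hexdigest(): i // BLOCK
            for i in range(0, len(old), BLOCK)}

def make_delta(sig, new: bytes):
    """Server side: ('copy', block_index) for blocks the client already
    has, ('data', literal_bytes) for everything else. Unlike real rsync,
    this only matches blocks at fixed offsets -- no rolling checksum --
    so a one-byte insertion near the top of the file defeats it."""
    delta = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        idx = sig.get(hashlib.md5(chunk).hexdigest())
        delta.append(("copy", idx) if idx is not None else ("data", chunk))
    return delta

def apply_delta(old: bytes, delta):
    """Client side: rebuild the new file from the old copy plus the delta."""
    out = []
    for op, arg in delta:
        out.append(old[arg * BLOCK:(arg + 1) * BLOCK] if op == "copy" else arg)
    return b"".join(out)
```

For the common case here (a small edit somewhere in a large file), nearly every block comes back as a cheap "copy" instruction rather than re-downloaded bytes.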

Idea #2: Making 50 HTTP GET requests just so the server can reply, for each one, that the file hasn't changed is quite slow. Why not introduce a "MULTIGET" request where every file that needs to be fetched is listed in a single request? The server would then send back a two-part response: the first part would list all of the files that had not changed since they were last fetched, while the second part would carry the rsync differences for each file that had.
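HTTP has no such verb, so everything below is an invented wire format purely for illustration: the client sends one manifest mapping each path to the ETag and block signature of its cached copy, and the server answers with the two parts described above. The read_current and current_etag helpers are hypothetical stand-ins for however the server looks up its files, and make_delta is the function from the previous sketch:

```python
import json

def handle_multiget(body, read_current, current_etag):
    """Serve one hypothetical MULTIGET exchange. The client lists every
    file it holds, with the ETag and block signature of its cached copy;
    the reply names the unchanged files in part one and carries an
    rsync-style delta for each changed file in part two."""
    manifest = json.loads(body)  # {path: {"etag": ..., "sig": {...}}}
    unchanged, deltas = [], {}
    for path, entry in manifest.items():
        if entry["etag"] == current_etag(path):
            unchanged.append(path)  # nothing at all to send for this file
        else:
            delta = make_delta(entry["sig"], read_current(path))
            # hex-encode literal bytes so the delta survives JSON
            deltas[path] = [[op, arg.hex() if op == "data" else arg]
                            for op, arg in delta]
    return json.dumps({"unchanged": unchanged, "deltas": deltas})
```

The point is that the whole page's worth of revalidation collapses into a single round trip, no matter how many files there are.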

That could be an order of magnitude faster in some cases... I'm tempted to simulate this to see just how much faster it would be.
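A crude latency model hints at the answer, though every constant in it is a made-up assumption (round-trip time, file count, transfer sizes), and real browsers soften the worst case by issuing requests in parallel over several connections:

```python
# Crude latency model, not a benchmark. Every constant is an assumption.
RTT = 0.040            # seconds per round trip
FILES = 50             # JavaScript files on the page
CHANGED = 2            # files modified since the last refresh
FULL_BYTES = 40_000    # size of one changed file, downloaded whole
DELTA_BYTES = 8_000    # size of its rsync-style delta instead
BANDWIDTH = 1_000_000  # bytes per second of usable throughput

# Status quo, worst case: one conditional GET per file, strictly serial.
serial = FILES * RTT + CHANGED * FULL_BYTES / BANDWIDTH

# Browsers actually issue requests in parallel, say 6 at a time.
parallel = (FILES / 6) * RTT + CHANGED * FULL_BYTES / BANDWIDTH

# MULTIGET: one round trip for everything, deltas for the changed files.
multiget = RTT + CHANGED * DELTA_BYTES / BANDWIDTH

for name, t in [("serial GETs", serial), ("6-way parallel", parallel),
                ("MULTIGET", multiget)]:
    print(f"{name:>14}: {t * 1000:6.0f} ms")
```

Under those made-up numbers the batched exchange beats even the parallel case by roughly a factor of seven, and the serial case by far more, which suggests the simulation is worth running for real.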