Over the past few years, web developers have increasingly focused on the front end, moving more and more critical application logic from the server to the browser. I've been pretty skeptical of this shift, for a variety of reasons. Browsers still vary widely in capability, so you can really only rely on the lowest common denominator of your target audience's browsers. If your target audience is young, wealthy Americans, you can safely assume they have very capable browsers. And that's great for the many startups trying to reach that market. But if you're building something that will be used by people in the many developing regions of the world, as we often are at Aten, you simply can't put much processing into the browser and expect it to work.
Bandwidth is also a big problem. Making decisions in a browser requires having the relevant data loaded in the browser. As an example, let's say your app is a simple text search of a book. The input is a single word and the output is "yes" or "no" depending on whether that word appears in the book. When that app runs on the server, the only data you're transferring back and forth is the input and the output, two words. But if you move the application logic to the browser, you also have to transfer the entire book to the browser. That would be ridiculous, right?
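To make the trade-off concrete, here's a minimal sketch of what the browser-side version of that check might look like. The function name and the book text are my own illustration, not anything from a real app; the point is that `book`, however large, has to be in memory on the client before this can run at all:

```typescript
// Hypothetical client-side "does this word appear in the book?" check.
// On a server, only the word and a yes/no cross the network; here, the
// entire book text must already have been downloaded to the browser.
function wordInBook(book: string, word: string): boolean {
  // Split on non-word characters so "dog." still matches "dog".
  const words = new Set(book.toLowerCase().split(/\W+/));
  return words.has(word.toLowerCase());
}

const found = wordInBook("It was the best of times.", "times"); // true
```

The search itself is trivial; the cost is the transfer of `book`, which is exactly the objection above.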
That's what I thought, before I read an article by Tom MacWright, titled Indexing and Searching Big Static Data. You might assume from that title, as I initially did, that he would talk about indexing and searching on a server. But what he described in that article is pretty much exactly the browser-based search I just described as ridiculous. It's actually a little more ridiculous:
So you scrape the data overnight. It’s 685MB of .doc files, 92MB when converted to text. Too big for browsers. Let’s do it anyway.
Tom went on to describe how you really could strain the capability of a modern browser to do a text search on enough text to fill a book. He skipped entirely over the "why would I do this when I have a perfectly good server?" question and went straight into looking at how it could be done. And while reading this, I realized something important: this is where we're going. I've seen people make many arguments for browser-side applications before, but I think Tom takes it as obvious, so he didn't even bother making that argument. Instead, he just painted an entirely plausible picture of what this future might look like. And it changed my mind.
All of the concerns about client-side application logic I described above are still true today. But they're much less true today than 15 years ago, and all signs point to these concerns becoming completely irrelevant over the next 15 years. There will be a time when running the equivalent of Google entirely in a browser is not only not ridiculous, but entirely common. And we're close enough to that time already that we can and should start thinking about what it will look like. Google launched on a dual-core 200MHz processor and 256 MB of RAM. The latest iPhone has over four times that processing power. There's no reason to believe this trend won't continue.
I'm not personally going to start building a search engine that runs entirely in the browser. I'm not even going to go full-speed ahead into moving all my applications' logic from the server to the browser. But I am going to drop my skepticism of browser-based applications, and start treating this as the future of the web. I'm going to stop thinking about whether this should happen and start thinking more seriously about how it will happen. If you're not already there, I invite you to join me.
Photo used under Creative Commons license from Graeme Newcomb