Very recently, Google made announcements that have rightfully excited the web development community; but after talking to people, I feel that not everyone realizes how those moves may entirely redefine how web development is done. So, I decided to summarize what’s going on in this post in a concise, straight-to-the-point way.

Note that this scenario depends on widespread adoption of what Google just announced, and of course, it strictly represents my own opinion.

Recent events

  • On May 20th, Google Chrome 35 was released with a fully working implementation of Object.observe (here is more information).
  • On May 23rd, Google Search revealed that they’ve been interpreting JS during their crawling (here is the announcement).

Explaining the Object.observe stuff

In a nutshell, there are two approaches to how the model tier of front-end MVC frameworks works:

  • Backbone and Ember require that you create a model by extending a model class they provide (var Product = Backbone.Model.extend(); var product = new Product(); …), and that you use get() and set() methods when you want to access and change values in your model. This allows the get() and set() methods not only to change the data, but also to let the view know what changed when it does.
  • Angular lets you use plain old JavaScript objects (var product = { name: "Chair", price: 50 };). When you change one of those model objects, your views “magically” update.
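
The difference is easy to see in code. Here is a minimal sketch of the get()/set() pattern (my own simplified illustration, not Backbone’s actual implementation), showing why explicit setters let a framework notify the view:

```javascript
// Minimal sketch of the get()/set() model pattern (not Backbone itself):
// the setter is the hook that lets the framework notify the view.
function ObservableModel(attributes) {
  this.attributes = attributes;
  this.listeners = [];
}

ObservableModel.prototype.get = function (name) {
  return this.attributes[name];
};

ObservableModel.prototype.set = function (name, value) {
  this.attributes[name] = value;
  // Because all writes go through set(), the model knows exactly
  // what changed and can tell the view, no scanning required.
  this.listeners.forEach(function (fn) { fn(name, value); });
};

ObservableModel.prototype.onChange = function (fn) {
  this.listeners.push(fn);
};

var product = new ObservableModel({ name: 'Chair', price: 50 });
product.onChange(function (name, value) {
  console.log(name + ' changed to ' + value);
});
product.set('price', 45); // logs "price changed to 45"
```

The price to pay is that a plain assignment like product.price = 45 would bypass the notification entirely, which is exactly why these frameworks force you through set().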

Now, most people would agree that Angular models are more efficient to develop with, and more pleasant to work with, but this “magic” comes with a huge trade-off called “dirty checking”, which is indeed very dirty.
Some people explain it at length better than I would, but the idea is that plain old objects don’t carry any logic, and therefore can’t alert your views by themselves when they change. So Angular needs to continuously check whether something in there has changed, comparing the current state of every model object to its previous state to find out what actually changed, one by one. Dirty checking is indeed dirty, and quite slow.
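
A naive version of dirty checking might look like this (a deliberately simplified sketch, not Angular’s actual digest loop):

```javascript
// Simplified sketch of dirty checking (not Angular's actual digest loop):
// compare every watched value against its previous snapshot on each cycle.
function createWatcher() {
  var watchers = [];
  return {
    watch: function (getValue, onChange) {
      watchers.push({ getValue: getValue, last: getValue(), onChange: onChange });
    },
    // Called after anything that might have touched the model.
    digest: function () {
      watchers.forEach(function (w) {
        var current = w.getValue();
        if (current !== w.last) {   // the "dirty" comparison
          w.onChange(current, w.last);
          w.last = current;
        }
      });
    }
  };
}

var product = { name: 'Chair', price: 50 };
var scope = createWatcher();
scope.watch(function () { return product.price; }, function (now, was) {
  console.log('price: ' + was + ' -> ' + now);
});

product.price = 45;  // a plain write: nothing happens yet...
scope.digest();      // ...until the framework rescans everything
```

The cost is visible right in the sketch: every digest walks every watcher, changed or not, and digests have to run after every event that might have touched the model.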

Detractors of Angular have been saying that Angular couldn’t be the way of the future, as dirty checking is indeed very dirty, and their opinion made sense. It turns out they were probably wrong after all: Chrome now fully implements Object.observe, which means you no longer need dirty checking to be alerted when something changes in a plain old JS object, and therefore you can use plain old objects as model objects with no trade-off.
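
In a supporting engine, Object.observe takes a plain object and a callback, and delivers change records asynchronously. Since most engines don’t ship it, the sketch below falls back to an ES6 Proxy (my own stand-in, not part of the Object.observe proposal) that intercepts writes synchronously instead:

```javascript
// Object.observe (Chrome 35 at the time of writing) delivers change records
// for plain objects asynchronously. Where it is missing, this sketch falls
// back to an ES6 Proxy that intercepts writes synchronously instead.
var changes = [];

function makeObserved(target, onChange) {
  if (typeof Object.observe === 'function') {
    // Native path: records are delivered at the end of the microtask.
    Object.observe(target, function (records) { records.forEach(onChange); });
    return target;
  }
  // Fallback sketch: a Proxy "set" trap fires on every property write.
  return new Proxy(target, {
    set: function (obj, name, value) {
      var type = (name in obj) ? 'update' : 'add';
      obj[name] = value;
      onChange({ object: obj, name: name, type: type });
      return true;
    }
  });
}

var product = makeObserved({ name: 'Chair', price: 50 }, function (change) {
  changes.push(change.type + ':' + change.name);
});

product.price = 45; // a plain write, yet the observer is notified
product.stock = 12; // a new property shows up as an "add" change
```

Either way, the model stays a plain object from the developer’s point of view, and nothing ever has to scan it for changes.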

Now, this is Chrome-only today, but if it gets implemented by other browsers and becomes fully usable in production, then the Backbone.js and Ember.js model tiers as we know them today will suddenly be obsolete.

Explaining JS interpretation during Google’s crawling

The “front-end MVC” architecture has come with a major downside: Google didn’t interpret JS, and therefore those sites were very hard (if not impossible) to get indexed without creating another, more “static” site. The issue in a nutshell: a single-page app is essentially a blank page that gets filled in by JS; so without the JS, Google sees only the blank page.

I’m aware of the other downsides, such as accessibility or performance, but I don’t consider them major, since I believe things can be done to improve them; there’s also fault-tolerance, but I don’t believe it will be as hard a problem in the future (I’ll get back to that). All in all: for a well-conceived single-page app, the only unavoidable dealbreaker was SEO, which made this architecture suitable only for cases that don’t need SEO (authenticated apps, …).

Google’s announcement means that this is simply not a problem anymore for a website’s Google indexing: their crawler knows how to execute JS and sees pages just like you do, even with a front-end MVC architecture. And this isn’t “tomorrow, maybe”: it’s happening today, and in fact has been for some time already. Dealbreaker gone.

First, what this won’t change

You will still need back-end stuff. As long as you need to persist data and perform data transactions that impact multiple users or devices, a fully front-end architecture is not an option.
Performance is another downside of front-end apps today, and as much as we can still improve it (more on this later), average front-end performance won’t be able to match average back-end performance. It can certainly get very, very close, but the JS code will always have to be sent and interpreted, no matter what (there are heuristics that make this a lot better in some cases, though).

What this would change

The front-end would have to handle more complexity, and I believe this is probably a good thing in the long run. I believe we’ve only been scratching the surface of what can be done with front-end MVC frameworks, and there’s still much to be done to make them better and to cover more use cases (more on this later).

Also, the back-end would therefore get simpler, covering less of the business logic and general role of the application. My guess is that most back-end work would fall into two kinds of approaches:

  • RESTful, resource-based back-ends, and we already have many options among existing frameworks. Only last week, the awesome people at StrongLoop released LoopBack, a framework dedicated to doing just that in a very simple way, based on MongoDB and Node.js (which is always a good choice for small stateless transactions without much business logic).
  • API cloud services that serve various simple purposes and can be integrated jointly for a given project, for instance: Firebase for real-time communication, Parse for an advanced cloud database, Moltin for e-commerce, or (full disclosure: that’s the product I work for) prismic.io for content management. This is a really, really exciting time to be working in API-based products.

What would have to get better

Current front-end MVC frameworks have some weaknesses when it comes to building complex apps, but I think that now that the SEO blocker is gone, all of them can be made much better. Remember that front-end MVC has been reserved for certain kinds of apps so far; now that these frameworks are a thousand times more relevant, it will just make sense to better address the issues of complex apps. I’m pretty sure I don’t have all of them in mind, but here are at least my two personal favorites.

Performance is the king of issues today; for instance, to load one webpage, you have to load the code for every controller in your website (this is true for Angular, and as far as I know, for Backbone and Ember too); the bigger your application, the more this becomes a problem. Angular’s modular approach is pretty solid, but it still requires the code for all modules to be available and loaded at all times. However, I don’t see what the problem would be if this approach were changed so that we could lazy-load the JS code of modules that aren’t needed right now. One way or another, it’s safe to say that this can be made much better.
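
The core idea of lazy loading can be sketched with a small module registry that only runs a module’s code the first time something asks for it (the names defineModule/requireModule are hypothetical, not any framework’s actual API):

```javascript
// Sketch of lazy module loading (hypothetical API, not Angular's):
// factories are registered up front, but a module's code only runs
// the first time something asks for it, and the result is cached.
var registry = {};
var instances = {};

function defineModule(name, factory) {
  registry[name] = factory; // cheap: just stores a function
}

function requireModule(name) {
  if (!(name in instances)) {
    instances[name] = registry[name](); // expensive work deferred to here
  }
  return instances[name];
}

defineModule('checkout', function () {
  console.log('checkout module loaded');
  return {
    placeOrder: function (items) { return items.length + ' items ordered'; }
  };
});

// Nothing has run yet; the "checkout" page may never be visited.
var checkout = requireModule('checkout'); // loaded on first use only
```

In a browser, the factory lookup would be replaced by injecting a script tag over the network and resolving a callback once it loads, but the principle is the same: the cost of a module is only paid when a route actually needs it.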

Fault-tolerance is annoying with JavaScript because, in real life, a single issue can make your whole JavaScript execution go to hell. Back when I was working with my friend Jérémie Patonnier, he would always say that you put too much trust in JS when you make it do something you could do in the back-end, because you never know where it’s going to break. This is even more true as the application becomes more complex and more bugs can occur. But the language already contains features that can safeguard your JS execution; they’re just not used very well today, and today’s MVC frameworks don’t seem very focused on solving this. It could definitely be made better, and JS can definitely be made more resilient with the right tools.
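
The most basic of those features is try/catch. Here is a sketch of a common defensive pattern (not any framework’s API): wrapping every handler so that one broken handler is contained instead of killing the whole script’s execution.

```javascript
// Sketch of a defensive wrapper: one throwing handler is contained
// and reported, instead of killing the whole script's execution.
var errors = [];

function safely(handler) {
  return function () {
    try {
      return handler.apply(this, arguments);
    } catch (e) {
      // Report and move on; the rest of the app keeps running.
      errors.push(e.message);
    }
  };
}

var updateCart = safely(function (item) {
  if (!item) { throw new Error('no item given'); }
  return 'added ' + item;
});

updateCart();        // would normally crash the script...
updateCart('chair'); // ...but execution continues normally
```

A framework that systematically wrapped its entry points this way (event handlers, callbacks, digest steps) would make a complex app degrade gracefully instead of dying on the first bug.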

Should we throw away Backbone and Ember right now? Is Angular the king and only winner?

No, and no.

If Object.observe does become a thing, and other browsers implement it seriously, then Backbone’s and Ember’s approach to models won’t make sense anymore, alright. If this happens, then they both will obviously implement models with plain old objects at some point, and even though I don’t know how much work that represents, I don’t see why they wouldn’t.

EDIT: Paul Chavard, who contributes to Ember.js, got in touch with me on Twitter to let me know that POJOs (plain old JavaScript objects) are indeed on the table, but that something else is in the way: handling “unknownProperty” and easy inheritance / mixins. It looks like it will be solved by ES6 Proxies. (Tweets are here and here.)

Also, Angular already comes with that approach to models today, which does make it the earliest to the game; but it also comes with other concepts and values that one may or may not find suitable for their needs (in data binding, the modular approach, dependency injection, etc.). To address the huge spectrum of needs, web developers will need choice, and I’m pretty sure that in time we will have a lot of choices, whether from revised versions of existing frameworks, or from new ones built over the next few months or years around other concepts and values, or with specific use cases in mind.

So, what now?

My advice to be prepared: if you don’t know an MVC framework yet, learn AngularJS, or another MVC framework you see fit. With Angular in particular, I found that the learning curve seemed steep at first, but it becomes fun very quickly, and it is really, utterly powerful.

Then, if you need to store data, look into LoopBack, or Parse, or how to make it happen with the back-end framework you know. If you need to have real-time communication (chat, whiteboard, …), look into Firebase. If you need e-commerce, look into Moltin. And of course, if you need content, create a prismic.io account today!