JavaScript Application Architecture On The Road To 2015


I once told someone I was an architect. It’s true in a way since I now have to design an intricate web of lies to back it up. On a serious note, I thought it might be salutary to look at the state of application architecture in the JavaScript community as we ebb our way towards 2015. I’ll talk about composition, functional boundaries, modularity, immutable data structures, CSP channels and a few other related topics.

Composition

On an architectural level, the way we craft large-scale applications in JavaScript has changed in at least one fundamental way in the last few years. Once you remove the minutiae of machinery bringing forth unidirectional data-binding, immutable data-structures and virtual-DOM (all of which are interesting problem spaces) the one key concept that many devs seem to have organically converged on is composition. Composition is incredibly powerful, allowing us to stitch together reusable pieces of functionality to “compose” a larger application. Composition ushers in a mindset of things being good when they’re modular, smaller and easier to test. Easier to reason about. Easier to distribute. Heck, just look at how well that works for Node.

 


A model for composition where an application is composed of smaller pieces of reusable UI which themselves are composed from extending and reusing existing modules and libraries.

Composition is one of the reasons we regularly talk about React “Components”, “Ember.Component”s, Angular directives, Polymer elements and, of course, straight-up Web Components. We may argue about the frameworks and libraries surrounding these different flavours of component, but rarely about whether composition itself is a good thing. Note: earlier players in the JS framework game (Dojo, YUI, ExtJS) touted composition strongly and it’s been around forever, but it’s taken a while for most of the front-end community to grok the true power of this model.

Composition is one solution to the problem of application complexity. Now, the way languages in the web platform evolve is in direct response to the pain caused by such complexity. Complexity can take lots of forms, but when we look at how developers have been building for the web over the last few years, common patterns are one of the most obvious things worth baking solutions in for. This is why an acknowledgement by browser vendors that Web Components are important is actually a big deal. Even if you absolutely don’t use ‘our’ flavour of them (I’ll contest they are powerful), hopefully you dig composition enough to use a solution that offers it.

Where I hope we’ll see improvements in the future is around state synchronisation (synchronising state between your components’ DOM and your model/server) and utilising the power of composition boundaries.

Composition Boundaries

It would be unfair to talk about composition on the web without also discussing Shadow DOM boundaries (humour me). It’s best to think about the feature as giving us true functional boundaries between widgets we’ve composed using regular old HTML. This lets us understand where one widget tree ends and another widget’s tree begins, giving us a chance to stop shit from a single widget leaking all over the page. It also allows us to scope down what CSS selectors can match such that deep widget structures can hide elements from expensive styling rules.

 


The shadow boundary is the barrier that separates the regular DOM (“light” DOM) from the shadow DOM

If you haven’t played around with Shadow DOM, it allows a browser to include a subtree of DOM elements (e.g those representing a component) into the rendering of a document, but not into the main document DOM tree of the page. This creates a composition boundary between one component and the next allowing you to reason about components as small, independent chunks without having to rely on iframes. Browsers with Shadow DOM support can move across this boundary at ease, but inside that subtree of DOM elements you can compose a component using the same div, input and select tags you use today. This also protects the rest of the page from component implementation details hidden away inside the Shadow DOM.

How does Shadow DOM compare to just using an iframe, something used since the dawn of time for separating scope and styling? Other than being horrible, iframes were designed to insert completely separate HTML documents into the middle of another, meaning that, by design, accessing DOM elements inside an iframe (a completely separate context) was meant to be brittle and non-trivial by default. Also think about ‘inserting’ multiple components into a page with this mechanism. You would hit a separate URL for the iframe host content in each case, littering your markup with unwieldy iframe tags that have little to no semantic value. Components using Shadow DOM for their composition boundaries are (or should be) as straight-forward to consume and modify as native HTML elements.

Playing Devil’s advocate, some may feel composition shouldn’t happen at the presentation layer (it may not be your cup of tea); others may feel that’s exactly where it belongs. Let’s start off by saying you absolutely can have component composition without the use of Shadow DOM; however, in order to achieve it, you end up having to be extremely disciplined about the boundaries of your components or add additional abstractions to make that happen. Shadow DOM gives you a way of doing this as a platform abstraction.

Whilst this is natively in Chrome and coming to other browsers in future, it’s exciting to see existing approaches exploring support for it — e.g React are exploring adding support for Shadow DOM and hope to evolve this work soon. Polymer already supports it and there appears to be interest from Ember and Angular in exploring this space.

Component messaging

What about communicating between components? Well if you’re working with decoupled, modular components there are a few options here.

Direct reference via component APIs is undesirable (unless you’re working on something super simple) as this introduces direct dependencies on specific versions of other components. If their APIs vastly change, you either have to pay the upgrade price or deal with breakage. Instead, you could use a classic global or in-component event system. If you require communication between components without a parent-child relationship, events + subscription remain a popular option. In fact, React recommends this approach, unless components do have a parent-child relationship, in which case you just pass props. Angular uses Services for component communication and Polymer has a few options here: custom events, change watchers and the `<core-signals>` element.
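
To make the event option concrete, here’s a minimal sketch of a global event system; EventBus, on and emit are made-up names for illustration rather than any particular library’s API:

var EventBus = {
  topics: {},
  on: function (topic, listener) {
    (this.topics[topic] = this.topics[topic] || []).push(listener);
  },
  emit: function (topic, data) {
    (this.topics[topic] || []).forEach(function (listener) {
      listener(data);
    });
  }
};

// One component subscribes...
EventBus.on('todo:added', function (todo) {
  console.log('Sidebar heard about', todo.title);
});

// ...and another publishes, with neither holding a direct reference to the other.
EventBus.emit('todo:added', { title: 'Write more modular code' });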

Is this the best we can do? Absolutely not. For events you’re going to fire-and-forget, the global event system model works relatively well but it becomes hairy once you start to need stateful events or chaining. As complexity grows, you may find that events interweave communication and flow control. Whilst there are many ways to improve your event system (e.g functional reactive programming) you may find yourself with events running arbitrarily large amounts of code.

What might be better than a global event system is CSP (Communicating Sequential Processes), a formalised way to describe communication in concurrent systems. CSP channels can be found in Go and in ClojureScript, where they’ve been formalised in the core.async project. CSP handles coordination of your processes using message passing, blocking execution when taking from or putting onto channels, making it easier to express complex async flows. The issue they solve: with a plain event system, at some point you may require duplex communication and end up relying on a stringly-typed convention to correlate requests and responses:

thingNeedsPhoto { id: 001, uuid: "foo" }
thingPhoto { data: "../photo.png", uuid: "foo" }

You then match the two together later. You’ve now mostly re-implemented function calls on top of a lossy, unoptimized event channel, and whenever there’s a mismatch because of a typo, debugging is going to be a pain.

 


Mechanisms derived from CSP provide abstractions for pieces of work to control other pieces of work. With primitives like this, it’s easier to build cooperative mechanisms for concurrency. CSP gives you a relatively low-level set of constructs. In Functional Reactive Programming, signals represent values that change over time. Signals have a push-based interface, which is why it’s reactive. FRP provides a purely functional interface so you don’t emit events, but you do have control flow structures allowing you to define transformations over base signals. Signals in FRP can be implemented using CSP channels.

James Long and Tom Ashworth have some rock solid posts on CSP and Transducers (composable algorithmic transformations) that are worth looking at if you find yourself wanting more than a global event system.

Also, do check out the js-csp project which offers a close JavaScript port of ClojureScript’s core.async, implementing Inversion-Of-Control (IOC) encapsulation using Generators (yield statements) rather than macros.
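
For a feel of what channel-based coordination looks like, here’s a small sketch using js-csp; it assumes the library’s chan/go/put/take style of API, and the request/reply shape is purely illustrative:

var csp = require('js-csp');

var requests = csp.chan();

// A 'process' that services photo requests, replying on a per-request channel
// so responses can never be matched to the wrong caller.
csp.go(function* () {
  while (true) {
    var req = yield csp.take(requests);
    yield csp.put(req.replyTo, { data: '../photo.png', id: req.id });
  }
});

// A consumer creates its own reply channel rather than relying on a stringly-typed uuid.
csp.go(function* () {
  var replyTo = csp.chan();
  yield csp.put(requests, { id: 1, replyTo: replyTo });
  var photo = yield csp.take(replyTo);
  console.log('Received', photo.data);
});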

ES6 & Browserify

I’ve written in the past about the net gains of large-scale systems that take advantage of decoupling and a sane approach to JavaScript ‘modules’. I consider us in a far far better position now than we were a few years ago, no longer just making do with AMD, RequireJS and the Module pattern.

We can thank the abundance of increasingly reliable tooling around Browserify (there’s an entire ecosystem of npm modules that are browserifiable) as well as the plethora of ES6 features available today through transpilation; super-useful while we wait for ES6 Modules to eventually ship in browsers.

 


es6ify by @thlorenz and @domenic result in a relatively nice pipeline for working with ES5 & ES6 with full support for source maps.

In many cases we’ve moved beyond this and it’s almost passe to frown at someone using a build-step in their authoring workflow. Plenty of JS library authors are similarly happy to use ES6 for their pre-built source.

ES6 Modules solve a plethora of issues we’ve faced in dependency and deployment, allowing us to create modules with explicit exports, import named exports from those modules and keep their names separate. They’re something of a combination between the asynchronous nature of AMD (needed in the browser) and the clarity of code you find in CommonJS, and they also handle circular dependencies in a better fashion.
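
A minimal sketch (file names are hypothetical):

// math.js: a module with explicit named exports
export function add(a, b) {
  return a + b;
}
export const VERSION = '1.0.0';

// app.js: import only what you need; the names stay local to this module
import { add } from './math.js';
console.log(add(2, 3)); // 5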

With dependencies in ES6 modules being static, we have a statically analyzable dependency graph, which is hugely useful. They’re also significantly cleaner than CommonJS (despite Browserify workflows making CommonJS pretty pleasant to use). In order to support CommonJS in the browser context, you either wrap the module at build time (as Browserify does) or XHR it in, wrap it yourself and eval it. Whilst this allows it to work in browsers, ES6 modules support both web and server use-cases more cleanly as they were designed from the bottom up to support both.

On the native front, I’ve also been excited to see support for ES6 primitives continue to be natively explored in V8, Chakra, SpiderMonkey, and JSC. Perhaps the biggest surprise for me was IE Tech Preview shooting up to 32/44 on the ES6 compat table ahead of everyone else:

 


IE Tech Preview already has support for ES6 Classes, for…of, Maps, Sets, typed arrays, Array.prototype methods and many other features.

In case you’ve missed the V8 progress updates I’ve been posting, here are a few reminders:

Template Strings/Literals with embedded expressions enabling easier string interpolation:

 

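An illustrative example:

var user = { name: 'Alice', unread: 3 };
// Backticks allow embedded ${...} expressions and span multiple lines:
var greeting = `Hello ${user.name}, you have ${user.unread} unread messages.`;
console.log(greeting); // Hello Alice, you have 3 unread messages.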

Object Literal Extensions. Shorthand properties and methods will help us save keystrokes:

 

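For instance:

var x = 10, y = 20;
var point = {
  x,              // shorthand property, instead of x: x
  y,
  toString() {    // shorthand method, instead of toString: function () { ... }
    return '(' + this.x + ', ' + this.y + ')';
  }
};
console.log(point.toString()); // (10, 20)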

ES6 Classes bringing syntactical sugar over today’s objects and prototypes:

 

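A quick sketch:

// Still prototypes under the hood, just friendlier to write and read:
class Widget {
  constructor(name) {
    this.name = name;
  }
  render() {
    return '<div>' + this.name + '</div>';
  }
}

class Button extends Widget {
  render() {
    return '<button>' + this.name + '</button>';
  }
}

console.log(new Button('Save').render()); // <button>Save</button>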

Whether it’s ES6 features or CommonJS modules, we have sufficient tooling to gift our projects with strong composition, whether they’re client- or server-side, isomorphic or otherwise. That’s kind of amazing. Don’t get me wrong, we have a long road ahead towards maturing the quality of our ecosystems but our composition story for the front-end is strong today.

Side: As we’ve already talked about Web Components, HTML Imports are worth a mention here too. JavaScript modules may not always be the best container format for components & their corresponding templates — a lot of people still use additional tooling to load and parse them.

There’s an argument to be made that as we as JS developers have an existing and somewhat mature ecosystem of tooling around scripts, moving back to HTML and having to rewrite tools to support it as a dependency mechanism can feel backwards. I see this problem in-part solved with tools like Vulcanize (for flattening imports) and hopefully going away with HTTP2.

I’m personally conflicted about how much machinery should go in pure script vs. an import and where the lines are drawn once we start to look at ES6 module interop. That said, HTML Imports are a nice way to package component resources, and they load scripts without blocking the parser (though they still block the load event). I remain hopeful that we’ll see usage of imports, modules and interop of both systems evolve in the future.

The Offline Problem

We don’t really have a true mobile web experience if our applications don’t work offline.

There have been fundamental challenges in achieving this in the past, but things are getting better. This year, APIs in the web platform have continued to evolve in a direction giving us better primitives, most interestingly of late, Service Workers. Service Workers are an API allowing us to make sites work offline through intercepting network requests and programmatically telling the browser what to do with these requests.

 


Service Worker enabling an offline experience for the Chrome Dev Summit site. This feature is available in Chrome 40 beta and above.

They’re what AppCache should have been, except with the right level of control. We can cache content, modify what is served and treat the network like an enhancement. You can learn more about Service Worker through Matt Gaunt’s excellent Service Worker primer or Jake Archibald’s masterful offline patterns article.
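
A rough sketch of that kind of control; the cache name and asset list here are placeholders:

// app.js: register the worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// sw.js: pre-cache a few assets at install time...
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('app-v1').then(function (cache) {
      return cache.addAll(['/', '/styles.css', '/app.js']);
    })
  );
});

// ...then answer requests from the cache, treating the network as a fallback.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});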

In 2015, I would like to see more routing and state-management libraries evolve to be built on top of Service Workers. First-class offline and synchronisation support for any routes you’ve navigated to would be huge, especially if developers get it for next to free.

 


Sneak peek of the Push API working in Chrome for Android.

This would help us offer significant performance improvements for repeat visits through cached views and assets. Service Workers are also somewhat of a bedrock API: request control is only the first of a plethora of new capabilities we may get on top of them, including Push Notifications and Background Synchronisation.

To learn more about how to use Push Notifications in Chrome today, read Push Notifications & Service Worker by Matt Gaunt.

Component APIs and Facades

One could argue that the “facade pattern” I’ve touched on in previous literature is still viable today, especially if you don’t allow the implementation details of your component to leak into its public API. If you are able to define a clean, robust interface to your component, consumers can continue to utilize it without worrying about the implementation details. Those can change at any time with minimal breakage.

An addendum to this could be that this is a good model for framework and library authors to follow for the public components they ship. While this is absolutely not tied to Web Components, I’ve enjoyed seeing the Polymer paper-* elements evolve over time with the machinery behind the scenes having minimal impact on public component APIs. This is inherently good for users. Try not to violate the principle of least astonishment, i.e the users of your component API shouldn’t be surprised by behaviour. Hold this true and you’ll have happier users and a happier team.
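
As a loose sketch of the idea (all names here are hypothetical), the public surface stays small and stable while the internals remain free to change:

function createGallery(element) {
  // Implementation details stay private; they could later be swapped for
  // Shadow DOM, a virtual-DOM renderer or anything else without breaking consumers.
  var images = [];

  function render() {
    element.innerHTML = images.map(function (src) {
      return '<img src="' + src + '">';
    }).join('');
  }

  // The facade: a small, stable public API.
  return {
    add: function (src) {
      images.push(src);
      render();
    },
    clear: function () {
      images = [];
      render();
    }
  };
}

var gallery = createGallery(document.querySelector('#gallery'));
gallery.add('cat.png');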

Immutable & persistent data structures

In previous write-ups on large-scale JS, I haven’t really touched on immutability or persistent data structures. If you’ve crossed paths with libraries like immutable-js or Mori and been unclear on where their value lies, a quick primer may be useful.

An immutable data structure is one that can’t be modified after it has been created, meaning the way to ‘modify’ it is to produce a new, updated copy rather than change it in place. A persistent data structure is one which preserves the previous versions of itself when changed. Data structures like this are immutable (their state cannot be modified once created). Mutations don’t update the structure in place, but rather generate a new updated structure. Anything pointing to the original structure has a guarantee it won’t ever change.

The real benefit behind persistent data structures is referential equality so it’s clear by comparing the address in memory, you have not only the same object but also the same data ~ Pascal Hartig, Twitter.

Let’s try to rationalize immutable data structures in the form of a Todo app. Imagine in our app we have a normal JS array for our Todo items. There’s a reference to this array in memory and it has a specific value. A user then adds a new Todo item, changing the array. The array has now been mutated. In JavaScript, the in-memory reference to this array doesn’t change, but the value of what it is pointing to has.

For us to know if the value of our array has changed, we need to perform a comparison on each element in the array — an expensive operation. Let’s imagine that instead of a vanilla array, we have an immutable one. This could be created with immutable-js from Facebook or Mori. Modifying an item in the array, we get back a new array and a new array reference in memory.

If we were to go back and check whether the reference to our array in memory is the same, and it is, the value is guaranteed not to have changed. This enables all kinds of things, like fast and efficient equality checking: as you’re only checking the reference rather than every value in the Todos array, the operation is cheaper.
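
A quick sketch of this using immutable-js’s List:

var Immutable = require('immutable');

var todos = Immutable.List(['Item 1', 'Item 2', 'Item 3']);
var updated = todos.push('Item 4');

// The original list is untouched; push returned a brand new list.
console.log(todos.size);        // 3
console.log(updated.size);      // 4

// Change detection becomes a cheap reference comparison:
console.log(todos === updated); // false, so something changed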

As mentioned, immutability should allow us to guarantee a data structure (e.g Todos) hasn’t been tampered with. For example (rough code):

var todos = ['Item 1', 'Item 2', 'Item 3'];
// Hypothetical operations: with an immutable structure they return
// new collections rather than mutating `todos` in place.
updateTodos(todos, newItem);
destructiveOpOnTodos(todos);
console.assert(todos.length === 3);

At the point we hit the assertion, it’s guaranteed that none of the ops since array creation have mutated it. This probably isn’t a huge deal if you’re already disciplined about how your data structures get changed, but it upgrades the guarantee from a “maybe” to a “definitely”.

I’ve previously walked through implementing an Undo stack using existing platform tech like Mutation Observers. If you’re working on a system using this, there’s a linear increase involved in the cost of memory usage. With persistent data structures, that memory usage can potentially be much smaller if your undo stack uses structural sharing.

Immutability comes with a number of benefits, including:

  • Typically destructive updates like adding, appending or removing on objects belonging to others can be performed without unwanted side-effects.
  • You can treat updates like expressions, as each change generates a value.
  • You get the ability to pass objects as arguments to functions and not worry about those functions mutating them.

These benefits can be helpful for writing web apps, but it’s also possible to live without them, and many do.

How does immutability relate to things like React? Well, let’s talk about application state. If state is represented by an immutable data structure, it is possible to check for reference equality right when deciding whether to re-render the app (or individual components). If the in-memory reference is equal, you’re pretty much guaranteed the data behind the app or component hasn’t changed. This allows you to bail out and tell React that it doesn’t need to re-render.
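
Concretely, that bail-out usually lives in shouldComponentUpdate; here’s a sketch assuming todos is an Immutable.List passed down as a prop:

var TodoList = React.createClass({
  shouldComponentUpdate: function (nextProps) {
    // With immutable data, a reference check is enough to know
    // whether the todos changed since the last render.
    return this.props.todos !== nextProps.todos;
  },
  render: function () {
    return React.createElement('ul', null,
      this.props.todos.toArray().map(function (todo, i) {
        return React.createElement('li', { key: i }, todo);
      })
    );
  }
});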

What about Object.freeze? Were you to read through the MDN description of Object.freeze(), you might be curious as to why additional libraries are still required to solve the issue of immutability. Object.freeze() freezes an object, preventing new properties from being added to it, existing properties from being removed and existing properties, or their enumerability, configurability, or writability, from being changed. In essence the object is made effectively immutable. Great, so... why isn’t this enough?
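
On the surface it does the job; a quick sketch:

'use strict';
var config = Object.freeze({ theme: 'dark' });

try {
  config.theme = 'light'; // throws a TypeError in strict mode (fails silently otherwise)
} catch (e) {
  console.log('Cannot mutate a frozen object:', e.message);
}

console.log(config.theme);            // 'dark'
console.log(Object.isFrozen(config)); // true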

Well, you technically could use Object.freeze() to achieve immutability, however, the moment you need to modify those immutable objects you will need to perform a deep copy of the entire object, mutate the copy and then freeze it. This is often too slow to be of practical use in most use-cases. This is really where solutions like immutable-js and mori shine. They also don’t just assist with immutability — they make it more pleasant to work with persistent data structures if you care about avoiding destructive updates.

Are they worth the effort?

Immutable data structures (for some use-cases) make it easier to avoid thinking about the side-effects of your code. If you’re working on a component or app where the underlying data may be changed by another entity, you don’t really have to worry if your data structures are immutable. Perhaps the main downside to immutability is the memory overhead, but again, this really depends on whether the objects you’re working with contain lots of data or not.

We have a long way to go yet

Beyond an agreement that composition is fundamentally good, we still disagree on a lot. Separation of concerns. Data-flow. Structure. The necessity for immutable data structures. Data-binding (two-way isn’t always better than one-way binding and depends on how you wish to model mutable state for your components). The correct level of magic for our abstractions. The right place to solve issues with our rendering pipelines (native vs. virtual diffing). How to hit 60fps in our user-interfaces. Templating (yes, we’re still not all using the template tag yet).

Onward and upward

Ultimately how you ‘solve’ these problems today comes down to asking yourself three questions:

1. Are you happy delegating such decisions and choices to an opinionated framework?
2. Are you happy ‘composing’ solutions to these problems using existing modules?
3. Are you happy crafting (from scratch) the architecture and pieces that solve these problems on your own?

I’m an idiot, still haven’t ‘figured’ this all out and am still learning as I go along. With that, please feel free to share your thoughts on the future of application architecture, whether there is more needed on the platform side, in patterns or in frameworks. If my take on any of the problems in this space is flawed (it may well be), please feel free to correct me or suggest more articles for my holiday reading list.

Note: You’ll notice a lack of content here around Web Components. As my work with Chrome covers articles written on Web Components primitives and Polymer, I feel sufficiently covered (maybe not!), but am always open to new explorations around these topics if folks have links to share.

I’m particularly interested in how we bridge the gaps between Web Components and Virtual-DOM diffing approaches, where we see component messaging patterns evolving here and what you feel the web is still missing for application authors.
