#EmberJS2019 More Accessible Than Ever


This post is by Yehuda Katz from Katz Got Your Tongue


It’s that time of year again: time to think about what the next year of Ember should hold.

Personally, I feel really great about the community’s effort around the Octane edition. What’s great about Octane, and any future edition we do, is that it focuses on polishing and documenting features we already have, and on providing a clear transition path from where we were to where we’re going.

Octane includes a lot of stuff we’ve been working on for a long time. Some highlights:

  • jQuery is no longer included by default in new apps
  • Glimmer components are the default components in Octane, which includes a new, massively slimmed down base class and angle bracket invocation.
  • Element modifiers are a new, more composable way to interact with the DOM from components.
  • Tracked Properties are the default way of managing data flow in Octane, which eliminates the need for a special computed property feature.
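To give a flavor of the idea behind tracked properties, here is a toy autotracking sketch in plain JavaScript. It illustrates the concept only (a global revision counter bumped on writes); Ember’s real implementation also records which properties each computation consumed, and the `tracked` helper and `countRevision` name below are hypothetical, invented for this sketch.

```javascript
// Toy sketch of autotracking (NOT Ember's implementation): every write
// to a "tracked" property bumps a global revision counter, so a
// consumer can cheaply ask "did anything change since I last looked?"
let currentRevision = 0;

function tracked(obj, key, initialValue) {
  let value = initialValue;
  let lastModified = 0;
  Object.defineProperty(obj, key, {
    get() { return value; },
    set(newValue) {
      value = newValue;
      lastModified = ++currentRevision;
    },
  });
  // Hypothetical helper for this sketch: expose the last-modified revision.
  obj[key + 'Revision'] = () => lastModified;
}

const counter = {};
tracked(counter, 'count', 0);

const seenAt = currentRevision;   // snapshot before "rendering"
counter.count = 1;                // a write bumps the revision
const dirty = counter.countRevision() > seenAt;  // true: re-render needed
```

In Octane itself this bookkeeping is handled for you by the `@tracked` decorator from `@glimmer/tracking`.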


Fun With PowerShell: Let’s Get Started (Digging Deeper into “The Pipeline”)


In the first post in the Fun with PowerShell series, we wrote a little script that searched the Open Movie Database for movies containing the word “Avengers”.

We learned about Invoke-RestMethod, the syntax for invoking commands, the concept of “pipelines”, and the fact that anything we type in a script that isn’t assigned to a variable or passed into a pipeline is printed out. We also learned about redirection and the special $null variable, which allows us to redirect output into nothingness.

If you’re just interested in learning enough PowerShell to be useful, feel free to move on to the second post in the series (Fun With PowerShell: Deduplicating Records). But if you’re curious to dig deeper, let’s unpack a few of the concepts we learned in more detail.

Invocation Syntax

Like in other shells, you invoke a command in PowerShell by mentioning it.

Get-Process

Fun with PowerShell, Deduplicating Records


In the previous post, we got a list of Avengers movies from the Open Movie Database and printed it onto the screen.

$movies = Invoke-RestMethod "http://www.omdbapi.com/?apikey=$key&s=Avengers"

$movies.search | Format-List
Title  : The Avengers
Year   : 2012
imdbID : tt0848228
Type   : movie
Poster : https://m.media-amazon.com/images/M/MV5BNDYxNjQyMjAtNTdiOS00NGYwLWFmNTAtNThmYjU5ZGI2YTI1XkEyXkFqcGdeQXVyMTMxODk2OTU@._V1_SX300.jpg

Title  : Avengers: Age of Ultron
Year   : 2015
imdbID : tt2395427
Type   : movie
Poster : https://m.media-amazon.com/images/M/MV5BMTM4OGJmNWMtOTM4Ni00NTE3LTg3MDItZmQxYjc4N2JhNmUxXkEyXkFqcGdeQXVyNTgzMDMzMTg@._V1_SX300.jpg

Title  : Avengers: Infinity War
Year   : 2018
imdbID : tt4154756
Type   : movie
Poster : https://m.media-amazon.com/images/M/MV5BMjMxNjY2MDU1OV5BMl5BanBnXkFtZTgwNzY1MTUwNTM@._V1_SX300.jpg

Title  : The Avengers
Year   : 1998
imdbID : tt0118661
Type   : movie
Poster : https://m.media-amazon.com/images/M/MV5BYWE1NTdjOWQtYTQ2Ny00Nzc5LWExYzMtNmRlOThmOTE2N2I4XkEyXkFqcGdeQXVyNjUwNzk3NDc@._V1_SX300.jpg

Title  : The Avengers: Earth's Mightiest Heroes
Year   : 2010–2012
imdbID : tt1626038
Type   : series
Poster : https://m.media-amazon.com/images/M/MV5BYzA4ZjVhYzctZmI0NC00ZmIxLWFmYTgtOGIxMDYxODhmMGQ2XkEyXkFqcGdeQXVyNjExODE1MDc@._V1_SX300.jpg

Title  : Ultimate Avengers
Year   : 2006
imdbID : tt0491703
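The excerpt cuts off here, but the core deduplication idea (keeping only the first record seen for a given key) can be sketched as follows. JavaScript is used purely for illustration, and deduplicating by Title is an assumption for this sketch; note that two different films can legitimately share a title, like the 1998 and 2012 entries above.

```javascript
// Keep only the first record seen for each value of `key`.
// Choosing Title as the key is an assumption for illustration; it
// deliberately drops the 1998 "The Avengers", which is a different film.
function dedupeBy(records, key) {
  const seen = new Set();
  return records.filter((record) => {
    if (seen.has(record[key])) return false;
    seen.add(record[key]);
    return true;
  });
}

const movies = [
  { Title: 'The Avengers', Year: '2012', imdbID: 'tt0848228' },
  { Title: 'Avengers: Age of Ultron', Year: '2015', imdbID: 'tt2395427' },
  { Title: 'The Avengers', Year: '1998', imdbID: 'tt0118661' },
];

const unique = dedupeBy(movies, 'Title');
// unique keeps the 2012 and 2015 entries; the 1998 film is dropped
```

In PowerShell itself, something like `Sort-Object Title -Unique` achieves a similar effect on the pipeline.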

PowerShell, Let’s Get Started!


I’ve been having a lot of fun playing around with PowerShell recently, and wanted to write up some of my learnings.

If you’re thinking: “I don’t use Windows, so this post isn’t for me”, good news! Microsoft has released versions of PowerShell for OSX and Linux with easy installers, and I’ve tested the examples in this series on both Windows and Linux. I document the installation instructions in a separate post.

Let’s get started. We’ll interact with a REST API called the Open Movie Database. All of the APIs in OMDB require a (free) API key, so if you want to follow along, start by grabbing one.

Let’s create a new directory called powershell-explore and create a new file in that directory called omdb.ps1. The ps stands for “PowerShell” and the 1 stands for “maybe we’ll want a version 2 someday” (also .ps is taken by PostScript).

$key = …

PowerShell, Installation


In my post on exploring PowerShell, I jumped right in.

If you’re on Windows, you already have PowerShell, and you can follow along.

If you’re on OSX, the easiest way to install PowerShell is

$ brew cask install powershell
$ pwsh

There are more details at Installing PowerShell Core on macOS (docs.microsoft.com).

On Ubuntu, it’s

$ sudo apt-get install -y powershell

There are more details, and instructions for other distributions at Installing PowerShell Core on Linux (docs.microsoft.com).

#Rust2018 – Exploring New Contribution and Feedback Models


Since I’m coming pretty late to the #Rust2018 party, most of the things I wanted to say have already been said!

Ashley’s kick-off post was kind of a meta-#Rust2018 for me, calling for us to experiment with new ways to get community feedback in Rust. I personally really enjoyed all of the energy in #Rust2018, and hope that we continue to experiment on this front.

I really loved Julia’s post, both for enumerating so many ways that Rust has become easier to use since last year, but also for showing that marketing Rust to people who have never written low-level code before need not conflict with marketing Rust to people who want a better C++ (and other audiences too!).

I liked both Steve’s post and Nick’s post for showing that we’re already on track to have a great 2018, as long as we stay focused on shipping and …

The Facebook Patent License Punishes You For Suing Facebook, But Lets Them Sue You


There’s been a lot of discussion recently about the Facebook patent clause (“PATENTS” in Facebook repositories).

While most of the objections to the license have focused on the patent revocation provisions, most of the defense focuses on the patent grant. This has meant that both sides are talking past each other, and casual readers are getting confused about what this is all about.


The Facebook patent grant comes with a revocation clause. It is meant to protect Facebook from patent lawsuits in general. It applies to Facebook’s patents, and therefore the revocation clause does not apply to Facebook itself (by definition). Your license to Facebook’s patents in order to use the OSS is revoked if:

  • You sue Facebook for patent infringement, or
  • You sue someone for patent infringement for using a Facebook product or service, or
  • You sue someone for using the OSS

Notably, it does nothing to punish …


The Glimmer VM: Boots Fast and Stays Fast


Great web applications boot up fast and stay silky smooth once they’ve started.

In other contexts, applications can choose between loading quickly and staying responsive once they’ve loaded. Great games can get away with a long loading bar as long as they react instantly once the gamer gets going. In contrast, scripting languages like Ruby, Python or Bash optimize for instant boot, but run their programs more slowly.

To optimize for boot time, scripting languages use interpreters and avoid expensive compilation steps. To optimize for responsiveness, games pre-fill their caches and do as much work up front as they can get away with. The web demands that we do both at the same time: users coming from search results pages must see content within a second on modern devices, but also demand 60fps once the application gets going.

Over the years, web browsers have responded to these requirements with more JIT tiers …

Fast Updates in Ember



Why I’m Working on Yarn


(This post is about Yarn, a new JS package manager that was announced today.)

I work with Node and npm packages almost every day, on Tilde’s main app, Skylight, or on one of Ember’s many packages.

Many have remarked upon how fast the npm registry has grown, and it’s hard to imagine working on any of my packages without the npm ecosystem.

I’ve also worked on a couple of application-level package managers (Bundler for Ruby and Cargo for Rust), so it’s no surprise that people have routinely asked me whether I’d consider writing a "bundler for node".

While it’s something I considered idly from time to time, the truth is that for all of the complaints people have about the official client, it does a whole lot that people rely on, and the npm team has done a lot to improve it over the years. I genuinely respect …


An Extensible Approach to Browser Security Policy


Alex Russell posted some thoughts today about how he wishes the W3C would architect the next version of the Content Security Policy.

I agree with Alex that designing CSP as a “library” that uses other browser primitives would increase its long-term utility and make it compose better with other platform features.

Alex is advocating the use of extensible web principles in the design of this API, and I wholeheartedly support his approach.

Background

You can skip this section if you already understand CSP.

For the uninitiated, Content Security Policy is a feature that allows web sites to opt into stricter security than what the web platform offers by default. For example, it can restrict which domains to execute scripts from, prevent inline scripts from running altogether, and control which domains the network stack is allowed to make HTTP requests to.

To opt into stricter security using the current version of CSP, a website includes a new header (Content-Security-Policy) which can contain a number of directives.

For example, in order to prevent the browser from making any network requests to cross-domain resources, a server can return this header:

Content-Security-Policy: default-src 'self'

This instructs the browser to restrict all network requests to the current domain. This includes images, stylesheets, and fonts. Essentially, this means that scripts run on your page will be unable to send data to third-party domains, which is a common source of security vulnerabilities.

If you want to allow the browser to make requests to its own domain, plus the Google Ajax CDN, your server can do this:

Content-Security-Policy: default-src 'self' ajax.googleapis.com
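To make the directive concrete, a heavily simplified source-matching check can be sketched in JavaScript. This is an illustration only: real CSP source matching also covers schemes, ports, wildcards, and many more directives, and the `isAllowed` function here is invented for the sketch.

```javascript
// Simplified default-src check: a request is allowed if its origin is
// the page's own origin ('self') or its host is on the whitelist.
// Real CSP matching also handles schemes, ports, wildcards, etc.
function isAllowed(policy, pageOrigin, requestUrl) {
  const sources = policy.split(/\s+/);
  const request = new URL(requestUrl);
  return sources.some((source) => {
    if (source === "'self'") return request.origin === pageOrigin;
    return request.hostname === source;
  });
}

const policy = "'self' ajax.googleapis.com";
const page = 'https://example.com';

isAllowed(policy, page, 'https://example.com/app.js');        // true
isAllowed(policy, page, 'https://ajax.googleapis.com/x.js');  // true
isAllowed(policy, page, 'https://evil.example.net/x.js');     // false
```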

Factoring the Network Layer

If you look at what CSP is doing, it’s essentially a syntax for controlling what the network stack is allowed to do.

There are other parts of the web platform that likewise control the network stack, and more all the time. What you’d like is for all of these features to be defined in terms of some lower-level primitive—ideally, one that was also exposed to JavaScript itself for more fine-grained, programmatic tweaks.

Imagine that you had the ability to intercept network requests programmatically, and decide whether to allow the request to continue. You might have an API something like this:

var origin = window.location.origin;
 
page.addEventListener('fetch', function(e) {
  var url = e.request.url;
  if (origin !== url.origin) {
    // block the network request
    e.preventDefault();
  }
 
  // otherwise, allow the network request through
});

You would then be able to describe how the browser interprets CSP in terms of this primitive API.

You could even imagine writing a CSP library purely in JavaScript!

page.addEventListener('fetch', function(e) {
  if (e.type === 'navigate') {
    e.respondWith(networkFetch(e.request.url).then(function(response) {
      // extract CSP headers and associate them with e.window.id
      // this is a pseudo-API to keep the implementation simple
      CSP.setup(e.window.id, response);
 
      return response;
    }));
  } else {
    if (!CSP.isAllowed(e.window.id, e.request)) {
      e.preventDefault();
    }
  }
});

The semantics of CSP itself can be expressed in pure JavaScript, so these primitives are enough to build the entire system ourselves!

I have to confess, I’ve been hiding something from you. There is already a proposal to provide exactly these network layer hooks. It even has exactly the API I showed above.

The Extensible Web

Extensible web principles give us a very simple path forward.

Continue iterating on the declarative form of the Content Security Policy, but describe it in terms of the same primitives that power the Navigation Controller proposal.

When web developers want to tweak or extend the built-in security features, they can write a library that intercepts requests and applies tweaks to the policy by extending the existing header syntax.

If all goes well, those extensions will feed into the next iteration of CSP, giving us a clean way to let platform users inform the next generation of the platform.

This approach also improves the likelihood that other features that involve the network stack will compose well with CSP, since they will also be written in terms of this lower level primitive.

Many of the benefits that Dave Herman outlined in the closing of my last post are brought into concrete terms in this example.

I hope to write more posts that explore how extensible web principles apply to platform APIs, both new and old.


Fellow web developers, let’s persuade Adam Barth, Dan Veditz, and Mike West (the CSP specification editors) to factor the next version of CSP in terms of the new Navigation Controller specification.

Then, we will have the tools we need to extend the web’s security model forward.


Extend the Web Forward


If we want to move the web forward, we must increase our ability as web developers to extend it with new features.

For years, we’ve grabbed the browser’s extension points with two hands, not waiting for the browser vendors to gift us with new features. We built selector engines, a better DOM API, cross-domain requests, cross-frame APIs.

When the browser has good extension points (or any extension points, really), we live in a virtuous cycle:

  • Web developers build new APIs ourselves, based on use-cases we have
  • We compete with each other, refining our libraries to meet use cases we didn’t think of
  • The process of competition makes the libraries converge towards each other, focusing the competition on sharp use-case distinctions
  • Common primitives emerge, which browser vendors can implement. This improves performance and shrinks the amount of library code necessary.
  • Rinse, repeat.

We’ve seen this time and time again. When it works, it brings us querySelectorAll, the template element, and Object.observe.

The Sad Truth

The sad truth is that while some areas of the browser are extremely extensible, other areas are nearly impossible to extend.

Some examples include the behavior and lifecycle of custom elements in HTML, the CSS syntax, and the way that the browser loads an HTML document in the first place. This makes it hard to extend HTML or CSS, or to build libraries that support interesting offline capabilities.

And even in some places that support extensibility, library developers have to completely rewrite systems that already exist. For example, John Resig had to rewrite the selector engine from scratch just to add a few additional pseudo-properties, and there is still no way to add custom pseudo-properties to querySelectorAll.

Declarative vs. Imperative

A lot of people see this as a desire to write everything using low-level JavaScript APIs, forever.

No.

If things are working well, JavaScript library authors write new declarative APIs that the browser can roll in. Nobody wants to write everything using low-level calls to canvas, but we’re happy that canvas lets us express low-level things that we can evolve and refine.

The alternative, that web authors are stuck with only the declarative APIs that standards bodies have invented, is too limiting, and breaks the virtuous cycle that allows web developers to invent and iterate on new high-level features for the browser.

In short, we want to extend the web forward with new high-level APIs, but that means we need extension points we can use.

Explaining the Magic

If we want to let web authors extend the web forward, the best way to do that is to explain existing and new high-level forms in terms of low-level APIs.

A good example of in-progress work along these lines is Web Components, which explains how elements work in terms of APIs that are exposed to JavaScript. This means that if a new custom element becomes popular, it’s a short hop to implementing it natively, because the JavaScript implementation is not a parallel universe; it’s implemented in terms of the same concepts as native elements.

That doesn’t necessarily mean that browsers will simply rubber-stamp popular components, but by giving library authors the tools to make components with native-like interfaces, it will be easy for vendors to synthesize web developer zeitgeist into something standard.

Another example is offline support. Right now, we have the much-derided AppCache, which is a declarative-only API that makes it possible to display an HTML page, along with its assets, even if the browser is offline.

AppCache is not implemented in terms of a low-level JavaScript API, so when web developers discovered problems with it, we had no way to extend or modify it to meet our needs. This also meant that we had no way to show the browser vendors what kinds of solutions would work for us.

Instead, we ended up with years of stagnation, philosophical disagreements and deadlock between web developers and specification editors, and no way to move forward.

What we need instead is something like Alex Russell’s proposal that allows applications to install JavaScript code in the cache that intercepts HTTP requests from the page and can fulfill them, even when the app is offline. With an API like this, the current AppCache could be written as a library!

Something like Jonas Sicking’s app cache manifest is a great companion proposal, giving us a nice starting point for a high-level API. But this time if the high-level API doesn’t work, we can fix it by using the low-level API to tweak and improve the manifest.

We can extend the web forward.

Extensions != Rewriting

It’s important to note that web developers don’t want a high-level API and then a cliff into the low-level API.

Today, while you can implement custom elements or extend the selector engine, you can only do this by rewriting large chunks of the stack alongside the feature you want.

Real extensibility means an architecture that lets you tweak, not rewrite. For example, it would be possible to add custom rules to CSS by writing a full selector engine and application engine, and apply rules via .style as the DOM changes. With mutation observers, this might even be feasible. In fact, this is how some of the most devious hacks in the platform today (like the Polymer Shadow DOM polyfill) actually work.

That kind of “extensibility” doesn’t fit the bill. It doesn’t compose well with other extensions, defeats the browser’s ability to do performance work on unrelated parts of the stack (because the entire stack had to be rewritten), and makes meaningful iteration too hard.

Browser implementers are often wary of providing extension points that can be performance footguns. The biggest footgun is using libraries that rewrite the entire stack in JavaScript, and whole-stack-rewriting strategies are the tactic du jour today. For performance, we have little to lose and much to gain by making extensions more granular.

Extend the Web Forward

So what do we gain from a more extensible web? I’ll let Dave Herman, a member of TC39, answer that for me.

  • When you design new APIs, you are forced to think about how the existing system can express most of the semantics. This cleanly separates what new power is genuinely needed and what isn’t. This prevents cluttering the semantics with unnecessary new magic
  • Avoiding new magic avoids new security surface area
  • Avoiding new magic avoids new complexity (and therefore bugs) in implementation
  • Avoiding new magic makes more of the new APIs polyfillable
  • Being more polyfillable means people can ramp up faster, leading to faster adoption and evolution of the platform
  • Avoiding new magic means that optimizations in the engines can focus on the stable core, which affects more of new APIs as they are added. This leads to better performance with less implementation effort
  • Avoiding new magic means less developer education required; people can understand new APIs more easily when they come out, because they build off of known concepts
  • This means that the underlying platform gets fleshed out to be expressive enough to prototype new ideas. Library authors can experiment with new features and create more cowpaths to fill the Web API pipeline

All this, and more! There’s something for everybody!

Implementors and web developers: let’s work together to extend the web forward!


I’m Running to Reform the W3C’s TAG


Elections for the W3C’s Technical Architecture Group are underway, and I’m running!

There are nine candidates for four open seats. Among the nine candidates, Alex Russell, Anne van Kesteren, Peter Linss, and Marcos Cáceres are running on a reform platform. What is the TAG, and what do I mean by reform?

What is the TAG?

According to the TAG’s charter, it has several roles:

  • to document and build consensus around principles of Web architecture
  • to interpret and clarify these principles when necessary
  • to resolve issues involving general Web architecture brought to the TAG
  • to help coordinate cross-technology architecture developments inside and outside W3C

As Alex has said before, the existing web architecture needs reform that would make it more layered. We should be able to explain the declarative parts of the spec (like markup) in terms of lower level primitives that compose well and that developers can use for other purposes.

And the W3C must coordinate much more closely with TC39, the (very active) committee that is designing the future of JavaScript. As a member of both TC39 and the W3C, I believe that it is vital that as we build the future of the web platform, both organizations work closely together to ensure that the future is both structurally coherent and pleasant for developers of the web platform to use.

Developers

I am running as a full-time developer on the web platform to bring that perspective to the TAG.

For the past several years, I have lobbied for more developer involvement in the standards process through the jQuery organization. This year, the jQuery Foundation joined both the W3C and ECMA, giving web developers direct representatives in the consensus-building process of building the future.

Many web developers take a very cynical attitude towards the standards process, still burned from the flames of the first browser wars. As a group, web developers also have a very pragmatic perspective: because we can’t use new features in the short-term, it’s very costly to take an early interest in standards that aren’t even done yet.

Of course, as a group, we developers don’t hesitate to complain about standards that didn’t turn out the way we would like.

(The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHOULD”, “SHOULD NOT”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC2119.)

The W3C and its working groups MUST continue to evangelize to developers about the importance of participating early and often. We MUST help more developers understand the virtues of broad solutions and looking beyond specific present-day scenarios. And we MUST evolve to think of web developers not simply as “authors” of content, but as sophisticated developers on the most popular software development platform ever conceived.

Layering

When working with Tom Dale on Ember.js, we often joke that our APIs are layered, like a delicious cake.

What we mean by layering is that our high-level features are built upon publicly exposed lower-level primitives. This gives us the freedom to experiment with easy-to-use concise APIs, while making it possible for people with slightly different needs to still make use of our hard implementation work. In many cases, such as in our data abstraction, we have multiple layers, making it possible for people to implement their requirements at the appropriate level of abstraction.
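As a concrete sketch of what layering means in practice (the class names here are invented for illustration, not taken from Ember's actual API), a library might expose a minimal low-level primitive and build its concise, opinionated API entirely on top of it:

```javascript
// Low-level primitive: a minimal async key/value store. Anyone with
// unusual requirements can build directly on this layer.
class Store {
  constructor() { this.data = new Map(); }
  async get(key) { return this.data.get(key); }
  async set(key, value) { this.data.set(key, value); }
}

// High-level layer: a friendlier API built entirely on the public
// primitive above -- no private hooks required.
class Cache {
  constructor(store, loader) {
    this.store = store;
    this.loader = loader;
  }
  async fetch(key) {
    let value = await this.store.get(key);
    if (value === undefined) {
      value = await this.loader(key);
      await this.store.set(key, value);
    }
    return value;
  }
}
```

Because `Cache` uses only `Store`'s public surface, someone whose needs don't fit `Cache` can still reuse `Store` at the lower level of abstraction.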

It can be tempting to build primitives and leave it up to third parties to build the higher level APIs. It can also be tempting to build higher level APIs only for particular scenarios, to quickly solve a problem.

Both approaches are prevalent on the web platform. Specs like IndexedDB are built at a very low level of abstraction, leaving it up to library authors to build a higher level of abstraction. In contrast, features like App Cache are built at a high level of abstraction, for a particular use-case, with no lower level primitives to use if a user’s exact requirements do not match the assumptions of the specification.

Alex’s effort on this topic is focused on Web Components and Shadow DOM, an effort to explain the semantics of existing tags in terms of lower-level primitives. These primitives allow web developers to create new kinds of elements that can have a similar level of sophistication to the built-in elements. Eventually, it should be possible to describe how existing elements work in terms of these new primitives.

Here’s another example a layer deeper: many parts of the DOM API have magic behaviors that are extremely difficult to explain in terms of the exposed API of ECMAScript 3. For example, the innerHTML property has side-effects, and ES3 does not provide a mechanism for declaring setters. The ECMAScript 5 specification provides some additional primitives that make it possible to explain more of the existing DOM behavior in terms of JavaScript. While designing ECMAScript 6, the committee has repeatedly discussed how certain new features could help explain more of the DOM API.
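The ES5 primitive in question is `Object.defineProperty`, which can declare accessor properties with getters and setters. As a rough sketch (using a plain object standing in for a real DOM node, with an invented `childCount` field standing in for subtree rebuilding), it lets an innerHTML-style property be described as ordinary JavaScript:

```javascript
// A toy "element" whose innerHTML assignment triggers side effects --
// the kind of magic ES3 could not express, but ES5 accessors can
// describe as plain JavaScript.
function makeElement() {
  const el = { childCount: 0 };
  let html = "";
  Object.defineProperty(el, "innerHTML", {
    get() { return html; },
    set(value) {
      html = value;
      // Stand-in for re-parsing the markup and rebuilding the subtree:
      // count the opening tags in the assigned string.
      el.childCount = (value.match(/<[a-z]/gi) || []).length;
    },
  });
  return el;
}
```

Before ES5, there was simply no way to write a property whose assignment ran arbitrary code, so innerHTML had to be treated as inexplicable host-object magic.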

Today, the web platform inherits a large number of existing specifications designed at one of the ends of the layering spectrum. I would like to see the TAG make an explicit effort to describe how the working groups can reform existing APIs to have better layering semantics, and to encourage them to build new specifications with layering in mind.

TC39 and JavaScript

Today, developers of the web platform increasingly use JavaScript to develop full-blown applications that compete with their native counterparts.

This has led to a renaissance in JavaScript implementations, and more focus on the ECMAScript specification itself by TC39. It is important that the evolution of JavaScript and the DOM APIs take one another into consideration, so that developers perceive them as harmonious, rather than awkward and ungainly.

Any developer who has worked with NodeList and related APIs knows that the discrepancies between DOM Array-likes and JavaScript Arrays cause pain. Alex has talked before about how standardizing subclassing of built-in objects would improve this situation. This would allow the W3C to explicitly subclass Array for its Array-like constructs in a well-understood, compatible way. That proposal will be strongest if it is championed by active members of both TC39 and the HTML working group.
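Subclassing of built-ins did eventually land in ES2015. A sketch of what it would buy a NodeList-style collection (the `ElementList` class and `byTag` method are invented for illustration; real NodeList does not work this way):

```javascript
// With ES2015 class syntax, an Array-like DOM collection could be a
// real Array subclass: every Array method works on it, and methods
// like filter and map even return instances of the subclass.
class ElementList extends Array {
  // A DOM-flavored convenience layered on inherited Array behavior.
  byTag(tagName) {
    return this.filter((el) => el.tagName === tagName);
  }
}

// Array.from, called on the subclass, constructs the subclass.
const list = ElementList.from([
  { tagName: "DIV" },
  { tagName: "SPAN" },
  { tagName: "DIV" },
]);
```

With a design like this, the familiar complaint that `document.querySelectorAll(...)` returns something you cannot `forEach` or `map` over would simply disappear.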

Similarly, TC39 has worked tirelessly on a proposal for loading JavaScript in an environment-agnostic way (the “modules” proposal). That proposal, especially the aspects that could impact the network stack, would be stronger with the direct involvement of an interested member of a relevant W3C working group.

As the web’s development picks up pace, the W3C cannot see itself as an organization that interacts with ECMA at the periphery. It must see itself as a close partner with TC39 in the development and evolution of the web platform.

Progress

If that (and Alex’s similar post) sounds like progress to you, I’d appreciate your organization’s vote. My fellow reformers Alex Russell, Anne van Kesteren, Peter Linss and Marcos Cáceres are also running for reform.

AC reps for each organization can vote here and have four votes to allocate in this election. Voting closes near the end of the month, and it’s also holiday season, so if you work at a member organization and aren’t the AC rep, please find out who that person is and make sure they vote.

As Alex said:

The TAG can’t fix the web or the W3C, but I believe that with the right people involved it can do a lot more to help the well-intentioned people who are hard at work in the WGs to build in smarter ways that pay all of us back in the long run.
