
incremental delivery with deduplication + concurrent execution

At a glance


These spec edits should correspond to the working implementation of incremental delivery with deduplication currently posted as a PR stack:
- introduces the Publisher
- the bulk of the effort; introduces deduplication
- adds `pending`

These spec edits do not currently include the optional follow-on: payload consolidation.

[The diff to main might be helpful, but this is built on top of the amazing #742, and so the diff from that branch could be more useful.]


The algorithms are hopefully trending toward complete -- more explanatory prose should definitely be added.


The implementation and spec changes show how one can start executing deferred fragments semi-immediately (i.e. after deferring in an implementation-specific way), rather than waiting for the entire initial result to be emitted. This is not required -- one could still be compliant with the spec by deferring all the way until the initial result completes! In fact, how one defers is not per se observable, and so the spec cannot mandate much about it with great normative force. But -- and I think this is important -- this PR and the implementation PR provide an algorithm, an implementation, and spec changes that give servers the flexibility to do what they think is right in that regard, and that might be desirable.
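To illustrate the flexibility described above, here is a minimal sketch of two compliant scheduling strategies. All names (`eagerDefer`, `lazyDefer`, the callback parameters) are hypothetical, not the actual graphql-js API; the point is only that both strategies emit the deferred payload after the initial result, so the choice is not observable to the client.

```typescript
// A stand-in for one payload in an incremental response stream.
type Payload = { label: string; data: unknown };

// Strategy A: begin executing the deferred work immediately, concurrently
// with completion of the initial result.
async function eagerDefer(
  executeDeferredFragment: () => Promise<Payload>,
  completeInitialResult: () => Promise<Payload>,
): Promise<Payload[]> {
  const deferred = executeDeferredFragment(); // starts right away
  const initial = await completeInitialResult();
  return [initial, await deferred]; // deferred payload still follows the initial result
}

// Strategy B: do not even start the deferred work until the initial
// result has completed.
async function lazyDefer(
  executeDeferredFragment: () => Promise<Payload>,
  completeInitialResult: () => Promise<Payload>,
): Promise<Payload[]> {
  const initial = await completeInitialResult();
  return [initial, await executeDeferredFragment()];
}
```

Both strategies yield the same payload order; only latency differs.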

As of this moment, I am fairly confident in the implementation PR over at graphql-js, and the spec PR should generally correspond, demonstrating:

- the Field Group and Defer Usage record types that contain the information derived from the operation during field collection
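As a rough sketch (the shapes below are illustrative, not the exact graphql-js definitions), field collection might tag each collected field node with the defer usage under which it was found:

```typescript
// One use of @defer in the operation, linked to its parent so that
// ancestry can be derived.
interface DeferUsage {
  label: string | undefined;
  parentDeferUsage: DeferUsage | undefined;
}

// All field nodes collected for one response key, each tagged with the
// defer usage (if any) under which it was collected.
interface FieldDetails {
  node: string; // stand-in for a FieldNode from the parsed operation
  deferUsage: DeferUsage | undefined;
}

interface FieldGroup {
  fields: ReadonlyArray<FieldDetails>;
}

// A field group belongs to the initial result if at least one of its
// field nodes was collected outside of any @defer.
function isInitial(group: FieldGroup): boolean {
  return group.fields.some((f) => f.deferUsage === undefined);
}
```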

- the introduction of a distinction between Incremental Data records and Subsequent Result records. Deferred Fragment records and Stream Items records exemplify Subsequent Results, which are sent as a group to the client. But an individual Deferred Fragment record may consist of a number of distinct Deferred Grouped Field Set records, which may overlap with those of other Deferred Fragment records and should not be sent more than once. Deferred Grouped Field Set records are therefore a unit of Incremental Data and are tracked with a new record type. Stream Items records always contain a single unit of incremental data that is sent only once, with little complication; they therefore represent both Subsequent Result and Incremental Data records.
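The deduplication consequence of this taxonomy can be sketched as follows. The record shapes and the `collectUnsent` helper are assumptions for illustration, not the implementation's actual API; the key idea is that a grouped field set shared by overlapping fragments is emitted at most once:

```typescript
// Incremental Data: a unit of data that must be sent at most once,
// even when shared by several overlapping deferred fragments.
interface DeferredGroupedFieldSetRecord {
  path: ReadonlyArray<string | number>;
  deferredFragmentRecords: DeferredFragmentRecord[];
  sent: boolean;
}

// Subsequent Result: a unit that is released to the client as a group.
interface DeferredFragmentRecord {
  label: string | undefined;
  deferredGroupedFieldSetRecords: DeferredGroupedFieldSetRecord[];
}

// Gather this fragment's grouped field sets that have not yet been sent,
// marking them as sent so a later overlapping fragment skips them.
function collectUnsent(
  fragment: DeferredFragmentRecord,
): DeferredGroupedFieldSetRecord[] {
  const out: DeferredGroupedFieldSetRecord[] = [];
  for (const rec of fragment.deferredGroupedFieldSetRecords) {
    if (!rec.sent) {
      rec.sent = true;
      out.push(rec);
    }
  }
  return out;
}
```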

- the new Publisher construct, with a set of algorithms that create and manipulate Subsequent Result and Incremental Data records. Mutation of these records is not performed directly during execution, but only via interaction with the Publisher.
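A minimal sketch of that ownership model follows. The class and method names (`register`, `complete`, `drain`) are hypothetical, not the implementation's actual surface; what the sketch shows is that executors only notify the Publisher of events, and the Publisher alone mutates record state:

```typescript
type IncrementalResult = { id: number; data: unknown };

// Owns all mutation of subsequent-result state; executors never touch
// the records directly, they only call into the Publisher.
class Publisher {
  private pending = new Set<number>();
  private completed: IncrementalResult[] = [];

  // Called when a subsequent result (deferred fragment / stream items)
  // is created during execution.
  register(id: number): void {
    this.pending.add(id);
  }

  // Executors report completion here; duplicate reports are ignored,
  // which is one place deduplication is enforced.
  complete(id: number, data: unknown): void {
    if (!this.pending.delete(id)) return;
    this.completed.push({ id, data });
  }

  // Drain whatever is ready for the next payload to the client.
  drain(): IncrementalResult[] {
    const batch = this.completed;
    this.completed = [];
    return batch;
  }

  // Whether the response stream should report hasNext: true.
  get hasNext(): boolean {
    return this.pending.size > 0 || this.completed.length > 0;
  }
}
```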

- the deferMap, which maps Defer Usage records to individual Deferred Fragment Subsequent Result records.
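The deferMap's role can be sketched as a simple memoizing lookup (record types are simplified stand-ins; the real records carry more state): each Defer Usage discovered during field collection maps to exactly one Deferred Fragment record, so overlapping selections under the same `@defer` reuse a single subsequent result.

```typescript
type DeferUsage = { label?: string };
type DeferredFragmentRecord = { label?: string };

// Maps each defer usage to the single Deferred Fragment record
// created for it.
const deferMap = new Map<DeferUsage, DeferredFragmentRecord>();

// Return the fragment record for a usage, creating it on first access
// so later lookups for the same usage reuse the same record.
function fragmentFor(usage: DeferUsage): DeferredFragmentRecord {
  let record = deferMap.get(usage);
  if (record === undefined) {
    record = { label: usage.label };
    deferMap.set(usage, record);
  }
  return record;
}
```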

**Huge thanks to @urigo and @dotansimha of The Guild for sponsoring my open-source work on this.**