If a client was in the process of receiving binary attachments when the
connection was abruptly closed, the manager would call
`decoder.destroy()` ([1]) but would then be stuck in a "parse error"
loop upon reconnection, since the decoder still expected a binary
attachment and not a CONNECT packet.
[1]: a1c528b089/lib/manager.ts (L520)
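The shape of the fix, as a sketch (assuming the decoder keeps a `reconstructor` object around while binary attachments are in flight; the names are illustrative, not necessarily the actual implementation):

```js
const { Decoder } = require('socket.io-parser');

// drop any partially reconstructed packet when the connection is closed,
// so that the next incoming packet (e.g. the CONNECT packet sent upon
// reconnection) is parsed from a clean state instead of being treated
// as a missing binary attachment
Decoder.prototype.destroy = function () {
  if (this.reconstructor) {
    this.reconstructor.finishedReconstruction();
    this.reconstructor = null;
  }
};
```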
zuul is now archived [1] and does not support the new W3C WebDriver
protocol, since it relies on the wd package [2] under the hood, which
uses the (now deprecated) JSON Wire Protocol.
We will now use the webdriver.io test framework, which allows us to run
our tests both locally and on Sauce Labs (cross-browser and mobile tests).
In particular, we can now run our tests on the latest versions of Android
and iOS, since Sauce Labs only supports the W3C WebDriver protocol for
these platforms ([3]).
[1]: https://github.com/defunctzombie/zuul
[2]: https://github.com/admc/wd
[3]: https://docs.saucelabs.com/dev/w3c-webdriver-capabilities/
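For reference, a minimal `wdio.conf.js` along these lines (the values below are placeholders, not our actual configuration):

```js
// wdio.conf.js -- minimal sketch, assuming the @wdio/sauce-service package
exports.config = {
  specs: ['./test/*.js'],
  framework: 'mocha',
  // credentials picked up by the Sauce Labs service
  user: process.env.SAUCE_USERNAME,
  key: process.env.SAUCE_ACCESS_KEY,
  services: ['sauce'],
  // W3C WebDriver capabilities
  capabilities: [
    {
      browserName: 'chrome',
      'sauce:options': {
        build: 'socket.io-parser tests'
      }
    }
  ]
};
```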
The binary detection was moved from the parser to the client/server in
[1], in order to allow the user to skip the binary detection for huge
JSON payloads.
```js
socket.binary(false).emit(...);
```
The binary detection is needed in the default parser, because the
payload is encoded with `JSON.stringify()`, which does not support binary
content (ArrayBuffer, Blob, ...).
But other parsers (like [2] or [3]) do not need this check, so we'll
move the binary detection back here and remove the `socket.binary()`
method, as this use case is now covered by the ability to provide your
own parser.
Note: the `hasBinary` method was copied from [4] (a sketch of such a
check follows the links below).
[1]: f44256c523
[2]: https://github.com/darrachequesne/socket.io-msgpack-parser
[3]: https://github.com/darrachequesne/socket.io-json-parser
[4]: https://github.com/darrachequesne/has-binary
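For illustration, a recursive check along the lines of the `hasBinary` helper might look like this (a sketch, not the exact copied implementation):

```js
// returns true if the value contains binary content anywhere in its
// structure (ArrayBuffer, typed arrays, Blob, File, ...)
function hasBinary(obj) {
  if (!obj || typeof obj !== 'object') {
    return false;
  }
  if (
    (typeof ArrayBuffer === 'function' &&
      (obj instanceof ArrayBuffer || ArrayBuffer.isView(obj))) ||
    (typeof Blob === 'function' && obj instanceof Blob) ||
    (typeof File === 'function' && obj instanceof File)
  ) {
    return true;
  }
  if (Array.isArray(obj)) {
    return obj.some(hasBinary);
  }
  // walk plain objects
  return Object.keys(obj).some((key) => hasBinary(obj[key]));
}
```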
This was needed in a previous version of the parser, which used msgpack
to encode the payload.
Blobs (and Files) will now be included in the array of binary
attachments without any additional transformation.
Breaking change: the `encode` method is now synchronous
See also 299849b002
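Illustration of the change in usage (a sketch, with names as in the current exports, which may differ slightly from the version at hand):

```js
const { Encoder } = require('socket.io-parser');

const encoder = new Encoder();
const packet = { type: 2 /* EVENT */, nsp: '/', data: ['hello'] };

// before: encoder.encode(packet, (encodedPackets) => { ... });

// after: the encoded chunks (packet string, plus any binary
// attachments) are returned synchronously
const encodedPackets = encoder.encode(packet);
encodedPackets.forEach((chunk) => {
  // hand each chunk over to the underlying transport
});
```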
The assertions were not checked because the functions are asynchronous.
Besides, the Blob tests were throwing in the browser:
> Uncaught ReferenceError: can't access lexical declaration 'BlobBuilder' before initialization
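The shape of the fix, as a sketch with illustrative names: the assertions have to run inside the async callback, and the test has to signal completion through `done()` so that a thrown assertion is actually reported:

```js
const expect = require('expect.js');
const parser = require('socket.io-parser');

it('encodes and decodes binary data', (done) => {
  const packet = {
    type: parser.BINARY_EVENT,
    nsp: '/',
    data: ['x', new ArrayBuffer(4)]
  };
  const encoder = new parser.Encoder();
  const decoder = new parser.Decoder();
  decoder.on('decoded', (decoded) => {
    // previously, assertions like this one ran outside the callback
    // and their failures were never reported
    expect(decoded.type).to.eql(parser.BINARY_EVENT);
    done();
  });
  encoder.encode(packet, (chunks) => {
    chunks.forEach((chunk) => decoder.add(chunk));
  });
});
```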
Separated the encoding and decoding into two public-facing objects,
`Encoder` and `Decoder`.
Both objects take no arguments on construction. `Encoder` has a single
method, `encode`, which mimics the previous version's `encode` function
(it takes a packet object and a callback). `Decoder` also has a single
method, `add`, which takes any object (a packet string or binary data).
`Decoder` emits a 'decoded' event once it has received all of the parts
of a packet; the only parameter of the 'decoded' event is the
reconstructed packet.
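A usage sketch of the API described above (the packet contents are illustrative):

```js
const parser = require('socket.io-parser');

const encoder = new parser.Encoder();
const decoder = new parser.Decoder();

decoder.on('decoded', (packet) => {
  // fires once every part of the packet has been received
  console.log(packet.type, packet.data);
});

const packet = { type: parser.EVENT, nsp: '/', data: ['hello'] };
encoder.encode(packet, (encodedPackets) => {
  // feed each encoded part (packet string or binary data) to the decoder
  encodedPackets.forEach((part) => decoder.add(part));
});
```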
I am hesitant about the `Encoder#encode` vs `Decoder#add` naming. Should
it be more consistent, or should it stay like this, where the function
names are more descriptive?
Also, rewrote the test helper functions to deal with the new event-based
decoding, and wrote a new test in test/arraybuffer.js that checks for
memory leaks in the Decoder as well.
This is a squash of a few commits; below is a small summary of them.
Results: before this change, the build size of socket.io-client was
~250K; now it is ~215K. The tests I was running here
(https://github.com/kevin-roark/socketio-binaryexample/tree/speed-testing)
take about 1/4 to 1/5 as long with this commit as with msgpack.
The first commit was the initial rewrite of the encoding, which removes
msgpack and instead uses a sequence of `engine.write()` calls for a
binary event. The first write is the packet metadata, with placeholders
in the JSON for any binary data. The following writes are the raw binary
data that fill those placeholders (a sketch of the resulting wire format
follows this summary).
The second commit was bug fixes that made the tests pass.
The third commit removed unnecessary packages from package.json.
The fourth commit added nice comments, and the fifth commit merged
upstream.
The remaining commits involved merging with the actual socket.io-parser
repository, rather than the protocol repository. Oops.
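As mentioned above, a sketch of the resulting wire format (illustrative; the exact framing may have differed at the time). Given a connected `socket`:

```js
const buffer = Buffer.from([1, 2, 3]);
socket.emit('file', buffer);

// first write: the packet metadata, with a placeholder standing in
// for the binary data:
//   51-["file",{"_placeholder":true,"num":0}]
//   (packet type 5 = binary event, followed by the attachment count "1-")

// second write: the raw contents of `buffer`, which fill placeholder
// num 0 on the receiving side before the event is emitted
```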