
Implement support for Websockets #178

Closed
ragnarlonn opened this issue Apr 10, 2017 · 17 comments

@ragnarlonn

Maybe using https://github.com/gorilla/websocket

@liclac can you foresee a lot of core engine modifications being needed to support WebSockets, or should it be fairly straightforward to just add another k6 module that implements (agreed-upon) script API functions for WebSocket communication?

liclac commented Apr 10, 2017

gorilla/websocket is good; it's what everyone uses (the official docs even defer to it), and it should be easy enough to build on the JS2 API. It just needs a solid API design.

micsjo commented Apr 10, 2017

Just consider the fact that websockets require asynchronous handling: they don't only respond to requests, they also push data. I haven't looked at gorilla/websocket, but I assume it has provisions for this.

I would like to re-emphasize that a really good API design is necessary for a useful implementation.

When designing APIs for websockets, perhaps also consider protocols such as SignalR, which prioritize transports in fallback order:

  1. WebSockets
  2. Server-Sent Events (SSE)
  3. Forever frames
  4. Long polling

Forever frames can probably be safely ignored, since they are only used by a few versions of IE.

SSE, like websockets, is supported by all the major browsers. Long polling is just plain long requests. The point is that they are all asynchronous, so again, this is just some input for careful API design.

liclac commented Apr 10, 2017

The most feasible architecture I can see for this is essentially a blocking run loop of sorts that uses callbacks to handle different events and shuts down once the session is done. The big catch on the technical side is that we'll need to be very careful with locking to make sure no more than one piece of JS is executing at any one time, but the implementation itself should be rather simple.
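
A minimal sketch of that locking idea, assuming a per-VU mutex held around every callback invocation; EventLoop and Dispatch are illustrative names, not k6's actual internals:

// Illustrative only: serialise JS callback execution for one VU with a mutex,
// so that no more than one piece of JS runs at any one time.
package ws

import (
    "sync"

    "github.com/dop251/goja"
)

// EventLoop guards callback execution for a single VU.
type EventLoop struct {
    mu sync.Mutex // held while any JS callback runs
}

// Dispatch runs a registered handler while holding the lock.
func (l *EventLoop) Dispatch(fn goja.Callable, args ...goja.Value) error {
    l.mu.Lock()
    defer l.mu.Unlock()
    _, err := fn(goja.Undefined(), args...)
    return err
}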

The big thing, as @micsjo says, is going to be API design: I don't believe we have any real precedent to look at, and we need to develop something that works with a wide range of applications. Our best frame of reference will probably be popular websocket libraries, e.g. the plain WebSocket API, but also SignalR, Faye, Engine.IO, Socket.IO, etc.

micsjo commented Apr 11, 2017

LoadRunner has supported websockets for almost three years.

LR websockets

There's also a JMeter plugin for this

JMeter websocket plugin

I have no idea how good or bad the JMeter plugin is, though.

@ragnarlonn

There is a bounty on this issue now: https://www.bountysource.com/issues/43973195-implement-support-for-websockets

Note to bounty hunters: this issue has not been terribly well specified and will probably need some discussion and specification before implementation can be started.

denim2x commented May 8, 2017

@ragnarlonn Here are some WebSocket API suggestions:

gbts commented May 15, 2017

OK, so after looking around at what the other tools & WS frameworks are doing, here's my API proposal. It mostly follows the API from ws, and I think it will be both familiar to users and relatively straightforward to implement. Obviously this is a simplified version; the final API will handle more events & options.

import websocket from "k6/websocket";

export default function () {
    var result = websocket.connect("wss://echo.websocket.org", function(socket) {
        socket.on('open', function open() {
            console.log('connected');
            socket.send(Date.now());
        });
        
        socket.on('message', function incoming(data, flags) {
            console.log(`Roundtrip time: ${Date.now() - data} ms`, flags);
            socket.setTimeout(function timeout() {
                socket.send(Date.now());
            }, 500);
        });

        socket.on('close', function close() {
            console.log('disconnected');
        });
    });
};

The key interface difference with ws is that the lifecycle of the socket is inside the connect function (in a way it's more similar to the server-side API).

From an implementation viewpoint, this means that we can contain the websocket inside a single blocking Connect function which will wrap the gorilla/websocket loop (i.e. something similar to this).

So each iteration for a VU will enter the blocking loop when calling connect, which will in turn call its second argument to register the event handlers (stored internally in a Socket structure whose lifetime follows each Connect call). Calls to send will feed a buffer in Socket that is watched by the main select loop, which will in turn pass each message to c.WriteMessage.
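
A rough sketch of that shape, under the assumptions described above; Socket, Connect, and the helpers below are illustrative names, not the eventual k6 code:

// Illustrative sketch: a blocking Connect that wraps a gorilla/websocket
// connection and runs a select loop until the session ends.
package ws

import (
    "github.com/dop251/goja"
    "github.com/gorilla/websocket"
)

// Socket lives for the duration of one Connect call: it holds the registered
// handlers and the outgoing-message buffer that send() feeds.
type Socket struct {
    conn     *websocket.Conn
    handlers map[string][]goja.Callable
    writeCh  chan []byte   // fed by socket.send(), drained by the main loop
    done     chan struct{} // closed when the connection goes away
}

// Connect blocks for the lifetime of the websocket session.
func Connect(url string, setupFn goja.Callable) error {
    conn, _, err := websocket.DefaultDialer.Dial(url, nil)
    if err != nil {
        return err
    }
    sock := &Socket{
        conn:     conn,
        handlers: make(map[string][]goja.Callable),
        writeCh:  make(chan []byte),
        done:     make(chan struct{}),
    }
    // The script registers its 'open'/'message'/'close' handlers here;
    // exposing sock to JS through the VU's goja runtime is omitted for brevity.
    // setupFn(goja.Undefined(), runtime.ToValue(sock))
    sock.dispatch("open", nil)

    readCh := make(chan []byte)
    go sock.readPump(readCh)

    for {
        select {
        case msg := <-sock.writeCh: // socket.send()
            if err := conn.WriteMessage(websocket.TextMessage, msg); err != nil {
                return err
            }
        case msg := <-readCh: // data pushed or echoed by the server
            sock.dispatch("message", msg)
        case <-sock.done:
            sock.dispatch("close", nil)
            return conn.Close()
        }
    }
}

// readPump feeds incoming frames to the main loop and signals when the
// connection goes away.
func (s *Socket) readPump(out chan<- []byte) {
    for {
        _, data, err := s.conn.ReadMessage()
        if err != nil {
            close(s.done)
            return
        }
        out <- data
    }
}

// dispatch calls every handler registered for an event; converting data into
// a goja.Value via the VU's runtime is left out of this sketch.
func (s *Socket) dispatch(event string, data []byte) {
    for _, fn := range s.handlers[event] {
        fn(goja.Undefined())
    }
}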

A special-case addition to the API is socket.setTimeout and socket.setInterval, which are particularly useful in these testing scenarios and which I think should be available here. With this design, each scheduled callable becomes a case in the main select loop. I think they can be implemented using ticker & timer channels that pass a goja.Callable, either by handing them to reflect.Select or by multiplexing them into a single channel.
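
One possible shape for that, continuing the hypothetical sketch above and assuming a time import plus an extra scheduledCh chan goja.Callable field on Socket; fired timers and tickers are multiplexed into that single channel, which the main select loop drains:

// Hypothetical continuation of the Socket sketch above. Fired timers and
// tickers push their goja.Callable onto scheduledCh, and the main select
// loop gains one extra case:
//
//     case fn := <-sock.scheduledCh:
//         fn(goja.Undefined())

// SetTimeout schedules fn to run once, ms milliseconds from now.
func (s *Socket) SetTimeout(fn goja.Callable, ms int) {
    go func() {
        select {
        case <-time.After(time.Duration(ms) * time.Millisecond):
            s.scheduledCh <- fn
        case <-s.done:
        }
    }()
}

// SetInterval schedules fn to run every ms milliseconds until the session ends.
func (s *Socket) SetInterval(fn goja.Callable, ms int) {
    go func() {
        ticker := time.NewTicker(time.Duration(ms) * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                s.scheduledCh <- fn
            case <-s.done:
                return
            }
        }
    }()
}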

Anyway, I hope I managed to explain what I have in mind. Let me know if it makes sense and I can start working on the implementation.

liclac commented May 15, 2017

I like this a lot, go right ahead!

One thing that will have to change before this works is how k6 counts iterations; right now, all samples for an iteration that had its tail cut off by context cancellation are discarded (we don't wanna flood the user with "context cancelled" errors), but with websockets added, it would no longer be strange for VUs to have only a single iteration… I'll make an issue for more granular cutoffs.

gbts commented May 19, 2017

I just submitted a draft implementation with PR #228 so that you can take a look. It mostly follows the design I proposed above.

From a functionality viewpoint it's working pretty well; the main thing missing right now (besides tests) is collecting some metrics/samples from each websocket session. I was thinking about replicating what LR or JMeter do, but I thought I'd ask you for some feedback first on what the best approach here would be.

liclac commented May 19, 2017

I'm not sure what either of those does, but really, just collect everything you possibly can: connection time, round-trip time, etc. You probably also want some way of passing extra tags to basically everything that emits metrics.
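
For illustration only, a hypothetical sketch of "collect everything, and tag it": Sample and emitSessionSamples are stand-ins rather than k6's real stats API, and only the ws_* metric names (which appear in the test output later in this thread) come from the module itself:

// Hypothetical sketch: Sample and emitSessionSamples are stand-ins for
// whatever k6's metrics pipeline actually uses.
package ws

import "time"

type Sample struct {
    Metric string
    Time   time.Time
    Tags   map[string]string
    Value  float64
}

// emitSessionSamples records connection time and session duration for one
// websocket session, merging user-supplied tags into every sample.
func emitSessionSamples(connStart, connEnd, sessEnd time.Time, userTags map[string]string, out chan<- Sample) {
    tags := map[string]string{"url": "wss://echo.websocket.org"} // default tag; value illustrative
    for k, v := range userTags {
        tags[k] = v
    }
    out <- Sample{Metric: "ws_connecting", Time: connEnd, Tags: tags,
        Value: float64(connEnd.Sub(connStart)) / float64(time.Millisecond)}
    out <- Sample{Metric: "ws_session_duration", Time: sessEnd, Tags: tags,
        Value: float64(sessEnd.Sub(connStart)) / float64(time.Millisecond)}
    out <- Sample{Metric: "ws_sessions", Time: sessEnd, Tags: tags, Value: 1}
}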

gbts commented May 27, 2017

A more complete implementation is now available for review in #228.

liclac closed this as completed Jun 13, 2017

gbts commented Jun 14, 2017

Hi @ragnarlonn, I just checked the docs and everything seems to be in order. Let me know if anything is unclear or if any issues come up in production!

@jrocketfingers

I know I might be late to the party, but it seems that

So each iteration for a VU will enter the blocking loop when calling connect, which will in turn call its second argument to register the event handlers (stored internally in a Socket structure whose lifetime follows each Connect call).

prevents us from testing the load with several sockets connecting, no? I understood that a VU in k6 represents the behavior of a user -- in this case, that particular user would open a WS connection. If that blocks, we can't scale the number of VUs. Am I wrong in thinking this?

What led me to assume this is the following result when running ./k6 run --vus 1000 ws.js:

          /\      |‾‾|  /‾‾/  /‾/   
     /\  /  \     |  |_/  /  / /   
    /  \/    \    |      |  /  ‾‾\  
   /          \   |  |‾\  \ | (_) | 
  / __________ \  |__|  \__\ \___/  Welcome to k6 v0.17.1!

  execution: local
     output: -
     script: /home/j/projects/stress-testing/ws.js (js)

   duration: 0s, iterations: 1
        vus: 1000, max: 1000

    web ui: http://127.0.0.1:6565/

      done [==========================================================]      10.1s / 10.1s

    data_received.........: 30 kB (3.0 kB/s)
    data_sent.............: 245 B (24 B/s)
    vus...................: 1000
    vus_max...............: 1000
    ws_connecting.........: avg=175.39ms max=175.39ms med=175.39ms min=175.39ms p(90)=175.39ms p(95)=175.39ms
    ws_msgs_received......: 608 (60.8/s)
    ws_session_duration...: avg=10.17s max=10.17s med=10.17s min=10.17s p(90)=10.17s p(95)=10.17s
    ws_sessions...........: 1 (0.1/s)

As far as I can tell, only a single socket has been opened. Would that be correct?

gbts commented Aug 11, 2017

Hi @jrocketfingers, no, that's not correct. VUs are run in parallel and are fully independent of each other, which means that each VU will open its own socket and have its own blocking loop.

What is meant by "blocking loop" here is that the JS runner for each VU blocks when ws.connect is called and will act as an event loop for the websocket from that point. It will call each of the registered event handlers until the connection is closed, at which point the blocking loop stops and the VU will execute the rest of the script.

The results you posted here do look like what you'd get from a single iteration of a single VU, which might point to some other issue (perhaps the server only accepts a single connection per IP?), but you can try running the websocket example script provided with k6 to verify this.

@jrocketfingers

Thanks for the clarification @gbts. That's how I understood it when I first found out about the tool. Coming back to it, I expected that behavior, but got the report above, as well as a single worker being launched (and utilizing a single core). The server rejecting connections isn't the issue, as I've been successfully stress testing it using Artillery and a custom WS client, both running with several thousand connections.

In any case, sorry for hijacking the issue; the report threw me off and I thought it was due to the particular implementation of the WS module. I'll open another issue if I don't resolve this in the meantime.

gbts commented Aug 11, 2017

No worries and thanks for reporting this. If you do open a new issue, feel free to ping me since I'm not a regular contributor here.

artsambrano commented May 14, 2018

@jrocketfingers I'd just like to ask whether you ever figured out the issue with the WS connections for a given number of VUs. I'm encountering the same issue and have been trying hard to find any resource on concurrent execution of k6 websockets, but with no luck. I've noticed that my HTTP requests are executed simultaneously (matching the number of VUs specified); however, when it comes to my next flow (the websocket request), it seems that k6 executes it one VU after another. Do you happen to have any workaround, or might IP spoofing(?) be of help?
