Chapter 8. Tyrus proprietary configuration

The following settings influence Tyrus behaviour and are NOT part of the WebSocket specification. If you use any of these configurable options, your application might not be easily portable to other WebSocket API implementations.

8.1. Client-side SSL configuration

When accessing "wss" URLs, the Tyrus client will pick up whatever keystore and truststore is set for the current JVM instance, but that might not always be convenient. The WebSocket API does not have this feature (yet, see WEBSOCKET_SPEC-210), so Tyrus exposes two SSL configuration classes, SSLEngineConfigurator and SslEngineConfigurator, which can be used to specify all SSL parameters for the current client instance. The former belongs to the Grizzly configuration API and therefore works only with the Grizzly client. The latter works with both the Grizzly and JDK clients and offers some extensions over the Grizzly SSLEngineConfigurator, allowing more control over host verification during the SSL handshake; for details, see the subsection on host verification below. Additionally, the WebSocket API does not have anything like a client, only WebSocketContainer, which does not have any properties, so you need to use the Tyrus-specific class ClientManager.

final ClientManager client = ClientManager.createClient();

System.getProperties().put("javax.net.debug", "all");
System.getProperties().put(SSLContextConfigurator.KEY_STORE_FILE, "...");
System.getProperties().put(SSLContextConfigurator.TRUST_STORE_FILE, "...");
System.getProperties().put(SSLContextConfigurator.KEY_STORE_PASSWORD, "...");
System.getProperties().put(SSLContextConfigurator.TRUST_STORE_PASSWORD, "...");
final SSLContextConfigurator defaultConfig = new SSLContextConfigurator();
defaultConfig.retrieve(System.getProperties());

    // or setup SSLContextConfigurator using its API.

SSLEngineConfigurator sslEngineConfigurator =
    new SSLEngineConfigurator(defaultConfig, true, false, false);
client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, sslEngineConfigurator);
client.connectToServer(... , ClientEndpointConfig.Builder.create().build(),
    new URI("wss://localhost:8181/sample-echo/echo"));

If there seems to be a problem with a Tyrus SSL connection, it is strongly recommended to set the javax.net.debug=all system property, as it provides invaluable information for troubleshooting.

8.1.1. Host verification

One of the key steps when establishing an SSL connection is verifying that the host on the certificate sent by the server matches the host the Tyrus client tries to connect to, thus preventing a man-in-the-middle attack. Host verification is turned on by default in Tyrus, which means that Tyrus will automatically check that the host provided in the URI in

client.connectToServer(... , new URI("wss://target-server:8181/application/endpoint"));

matches exactly the host the certificate has been issued for. Exact match is the key phrase in the previous sentence, as the host can be either a hostname or an IP address and the two cannot be used interchangeably. For instance, when a certificate has been issued for "localhost", establishing an SSL connection to the same server via its IP address will fail, as the host does not match the one in the certificate.

The default host verification can be too restrictive in some cases, therefore Tyrus provides means to either disable host verification (highly discouraged in production) or to implement a custom host verifier. Providing a custom host verifier disables the default one. It is also important to note that the Grizzly-specific SSLEngineConfigurator does not provide these options; to modify the default host name verification policy, SslEngineConfigurator must be used instead. The following sample shows how to disable host name verification:

SslEngineConfigurator sslEngineConfigurator = new SslEngineConfigurator(new SslContextConfigurator());
client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, sslEngineConfigurator);

The following sample shows how to register a custom host verifier:

SslEngineConfigurator sslEngineConfigurator = new SslEngineConfigurator(new SslContextConfigurator());
sslEngineConfigurator.setHostnameVerifier(new HostnameVerifier() {
    @Override
    public boolean verify(String host, SSLSession sslSession) {
        try {
            Certificate certificate = sslSession.getPeerCertificates()[0];
            // validate the host in the certificate here and return the result;
            // this placeholder accepts the certificate unconditionally
            return true;
        } catch (SSLPeerUnverifiedException e) {
            return false;
        }
    }
});

client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, sslEngineConfigurator);

8.2. Asynchronous connectToServer methods

WebSocketContainer.connectToServer(...) methods are by definition blocking - declared exceptions need to be thrown after a connection attempt is made, and the methods return a Session instance, which needs to be ready for sending messages and invoking other methods that require an already established connection.

The existing connectToServer methods are fine for many uses, but they can cause issues when you are designing an application with a highly responsive user interface. Tyrus therefore introduces an asynchronous variant of each connectToServer method (prefixed with "async"), which returns Future&lt;Session&gt;. These methods perform only a simple check of the provided URL; the rest is executed in a separate thread. All exceptions thrown during this phase are reported as the cause of the ExecutionException thrown when calling Future&lt;Session&gt;.get().

Asynchronous connect methods are declared on Tyrus implementation of WebSocketContainer called ClientManager.

ClientManager client = ClientManager.createClient();
final Future<Session> future = client.asyncConnectToServer(ClientEndpoint.class, URI.create("..."));
try {
    Session session = future.get();
} catch (InterruptedException | ExecutionException e) {
    // handle the connection failure (the original exception is available as the cause)
}

ClientManager contains an async alternative to each connectToServer method.
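The reporting of connection failures as the cause of an ExecutionException is standard java.util.concurrent behaviour and can be illustrated without any Tyrus dependency (a stdlib-only sketch; the failing Callable merely stands in for a connection attempt):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncFailureDemo {

    // Runs a "connection attempt" that fails in a background thread and
    // returns the message of the original exception, extracted from the
    // ExecutionException thrown by Future.get().
    static String failureMessage() throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Callable<String> connect = new Callable<String>() {
            @Override
            public String call() {
                throw new IllegalStateException("connection refused");
            }
        };
        Future<String> future = executor.submit(connect);
        try {
            future.get(); // blocks, then rethrows the background failure
            return null;
        } catch (ExecutionException e) {
            return e.getCause().getMessage(); // original exception is the cause
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(failureMessage()); // prints "connection refused"
    }
}
```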

8.3. Optimized broadcast

One of the typical use cases we've seen so far for WebSocket server-side endpoints is broadcasting messages to all connected clients, something like:

public void onMessage(Session session, String message) throws IOException {
  for (Session s : session.getOpenSessions()) {
    s.getBasicRemote().sendText(message);
  }
}

Executing this code might cause a serious load increase on your application server. Tyrus provides an optimized broadcast implementation, which takes advantage of the fact that we are sending exactly the same message to all clients, so the dataframe can be created and serialized only once. Furthermore, Tyrus can iterate over the set of open connections faster than Session.getOpenSessions().

public void onMessage(Session session, String message) {
  ((TyrusSession) session).broadcast(message);
}

Unfortunately, the WebSocket API forbids anything other than Session as the session parameter of an @OnMessage annotated method, so you cannot declare TyrusSession there directly and you might need to perform an instanceof check.

8.4. Incoming buffer size

The Servlet container buffers incoming WebSocket frames, and there must be a size limit to prevent an OutOfMemoryError and potential DoS attacks.

The configuration property is named "org.glassfish.tyrus.servlet.incoming-buffer-size" and you can set it in web.xml (this particular snippet sets the buffer size to 17000000 bytes, i.e. a ~16M payload):

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <context-param>
        <param-name>org.glassfish.tyrus.servlet.incoming-buffer-size</param-name>
        <param-value>17000000</param-value>
    </context-param>
</web-app>


The default value is 4194315, which corresponds to 4M plus a few bytes for frame headers, so you should be able to receive messages up to 4M long without having to care about this property.

The same issue is present on the client side. There you can set this property via ClientManager:

ClientManager client = ClientManager.createClient();
client.getProperties().put("org.glassfish.tyrus.incomingBufferSize", 6000000); // sets the incoming buffer size to 6000000 bytes.
client.connectToServer( ... )

8.5. Shared client container

By default, the WebSocket client implementation in Tyrus re-creates the client runtime whenever WebSocketContainer#connectToServer is invoked. This approach gives us some perks, like out-of-the-box isolation and a relatively low thread count (currently 1 selector thread and 2 worker threads). It also gives you the ability to stop the client runtime – one Session instance is tied to exactly one client runtime, so we can stop it when the Session is closed. This seems like a good solution for most WebSocket client use cases – you usually use the Java client from an application which uses it for communicating with the server side, and you typically don’t need more than 10 instances (my personal estimate is that more than 90% of applications won’t use more than 1 connection). There are several reasons for this – since it is just a client, it should preserve server resources – one WebSocket connection means one TCP connection, and we don’t really want clients to consume more than needed. The previous statement may be invalidated by the WebSocket multiplexing extension, but for now, it is still valid.

On the other hand, WebSocket client implementations in some other containers take another (also correct) approach – they share the client runtime among all client connections. That means they might not have this strict one-session-one-runtime policy and they cannot really give the user a way to control system resources, but it surely has another advantage – it can handle many more open connections. Thread pools are shared among client sessions, which may or may not have some unforeseen consequences, but if implemented correctly, it should outperform the Tyrus solution mentioned in the previous paragraph in some use cases, like the one mentioned in TYRUS-275 - performance tests. The reporter created a simple program which used the WebSocket API to create clients and connect to a remote endpoint, and measured how many clients he could create (in other words: how many parallel client connections can be created; I guess the original intent was to measure the possible number of concurrent clients on the server side, but that does not really matter here). The Tyrus implementation lost compared to some others, exactly because it did not have a shared client runtime capability.

How can you use this feature?

ClientManager client = ClientManager.createClient();

client.getProperties().put(ClientProperties.SHARED_CONTAINER, true);

You might also want to specify container idle timeout:

client.getProperties().put(ClientProperties.SHARED_CONTAINER_IDLE_TIMEOUT, 5);

Last but not least, you might want to specify the thread pool sizes used by the shared container (please use this feature only when you know what you are doing; Grizzly by default does not limit the maximum number of threads, so if you set a limit, make sure the thread pool size fits your purpose). Even though the default unlimited thread pool size is sufficient for the vast majority of client usages, it is also important to note that if the maximum thread pool size is not specified and the clients sharing the thread pool receive a large number of messages at the same moment, a new thread can be created for each of the received messages, which may demand a large amount of system resources and might even lead to a program failure if the required resources are not available. Therefore, for particularly busy clients, setting the maximum thread pool size is recommended. The following example shows how to set the maximal thread pool size.

client.getProperties().put(GrizzlyClientProperties.SELECTOR_THREAD_POOL_CONFIG, ThreadPoolConfig.defaultConfig().setMaxPoolSize(3));
client.getProperties().put(GrizzlyClientProperties.WORKER_THREAD_POOL_CONFIG, ThreadPoolConfig.defaultConfig().setMaxPoolSize(10));

8.5.1. Custom masking key generator

As a security measure, all frames originating on a websocket client have to be masked with a random 4B value, which must be generated for each frame. Moreover, to fully comply with the security requirements of RFC 6455, the masking key of a frame must not be predictable from the masking keys of previous frames, and therefore Tyrus uses SecureRandom as the default masking key generator. While this is perfectly OK for most Tyrus client use cases, SecureRandom might prove to be a performance issue when the client is used, for instance, for highly parallel stress testing, as it uses a synchronized singleton as a random entropy provider in its internals.

To overcome the limitations mentioned above, Tyrus allows replacing the default SecureRandom with a more scalable masking key generator. Please be aware that there might be security implications if you decide not to use a cryptographically secure random number generator in production, like the one in the following sample. Moreover, the supplied random number generator should be thread safe. The following example shows how a custom masking key generator can be configured:

ClientManager client = ClientManager.createClient();
client.getProperties().put(ClientProperties.MASKING_KEY_GENERATOR, new MaskingKeyGenerator() {

    private final Random random = new Random();

    @Override
    public int nextInt() {
        return random.nextInt();
    }
});

It is also important to note that the scalability issue connected to the default masking key generator is not limited to the shared-container client configuration; it is discussed in this section because a shared container is assumed to be used by highly parallel clients handling a lot of traffic, where the method of masking key generation starts to matter.
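For illustration of what the generated key is then used for: RFC 6455 masks each payload byte by XORing it with one byte of the 4-byte masking key, so unmasking is the same operation. This is a stdlib-only sketch, not Tyrus code; maskPayload is a hypothetical helper:

```java
import java.util.Arrays;

public class MaskingDemo {

    // XOR each payload byte with the corresponding byte of the 4-byte masking
    // key (RFC 6455, section 5.3), most significant key byte first. Masking is
    // an involution: applying it twice restores the original payload.
    static byte[] maskPayload(byte[] payload, int maskingKey) {
        byte[] masked = Arrays.copyOf(payload, payload.length);
        for (int i = 0; i < masked.length; i++) {
            int shift = (3 - (i % 4)) * 8; // select byte i % 4 of the key
            masked[i] ^= (byte) (maskingKey >>> shift);
        }
        return masked;
    }

    public static void main(String[] args) {
        byte[] payload = "Hello".getBytes();
        int key = 0xCAFEBABE; // in Tyrus this would come from the MaskingKeyGenerator
        byte[] masked = maskPayload(payload, key);
        byte[] unmasked = maskPayload(masked, key); // unmasking is the same operation
        System.out.println(Arrays.equals(payload, unmasked)); // prints "true"
    }
}
```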

8.6. WebSocket Extensions

Please note that Extensions support is considered experimental and any related API can change at any time. Also, you should ask yourself at least twice whether you can achieve your goal by other means - a WebSocket Extension is very powerful and can easily break your application when not used with care or enough expertise.

The WebSocket frame representation used in ExtendedExtension:

public class Frame {

    public boolean isFin() { .. }
    public boolean isRsv1() { .. }
    public boolean isRsv2() { .. }
    public boolean isRsv3() { .. }
    public boolean isMask() { .. }
    public byte getOpcode() { .. }
    public long getPayloadLength() { .. }
    public int getMaskingKey() { .. }
    public byte[] getPayloadData() { .. }
    public boolean isControlFrame() { .. }

    public static Builder builder() { .. }
    public static Builder builder(Frame frame) { .. }

    public final static class Builder {

        public Builder() { .. }
        public Builder(Frame frame) { .. }
        public Frame build() { .. }
        public Builder fin(boolean fin) { .. }
        public Builder rsv1(boolean rsv1) { .. }
        public Builder rsv2(boolean rsv2) { .. }
        public Builder rsv3(boolean rsv3) { .. }
        public Builder mask(boolean mask) { .. }
        public Builder opcode(byte opcode) { .. }
        public Builder payloadLength(long payloadLength) { .. }
        public Builder maskingKey(int maskingKey) { .. }
        public Builder payloadData(byte[] payloadData) { .. }
    }
}

Frame is immutable, so if you want to create new one, you need to create new builder, modify what you want and build it:

Frame newFrame = Frame.builder(originalFrame).rsv1(true).build();

Note that there is only one convenience method: isControlFrame(). Other information about the frame type etc. needs to be evaluated directly from the opcode, simply because there might not be enough information to get the correct outcome, or the information itself would not be very useful. For example: opcode 0x00 means continuation frame, but you have no way to get the actual type (text or binary) without intercepting data from previous frames. Consider the Frame class to be as raw a representation as possible. isControlFrame() could also be derived from the opcode, but it is at least always deterministic and will be used by most extension implementations. It is not usual to modify control frames, as that might end with half-closed connections or unanswered ping messages.
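The determinism of isControlFrame() follows directly from the RFC 6455 opcode values: control opcodes are exactly 0x8-0xF, i.e. those with the high bit of the 4-bit opcode set. A stdlib-only sketch (not Tyrus code):

```java
public class OpcodeDemo {

    // RFC 6455 opcodes: 0x0 continuation, 0x1 text, 0x2 binary (data frames);
    // 0x8 close, 0x9 ping, 0xA pong (control frames).
    static boolean isControlFrame(byte opcode) {
        return (opcode & 0x08) != 0; // control opcodes are 0x8-0xF
    }

    public static void main(String[] args) {
        System.out.println(isControlFrame((byte) 0x01)); // text frame -> prints "false"
        System.out.println(isControlFrame((byte) 0x09)); // ping frame -> prints "true"
    }
}
```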

The ExtendedExtension representation needs to be able to handle extension parameter negotiation and the actual processing of incoming and outgoing frames. It also should be compatible with the existing javax.websocket.Extension class, since we want to re-use the existing registration API and be able to return the new extension instance in the response from the List&lt;Extension&gt; Session.getNegotiatedExtensions() call. Consider the following:

public interface ExtendedExtension extends Extension {

    Frame processIncoming(ExtensionContext context, Frame frame);
    Frame processOutgoing(ExtensionContext context, Frame frame);

    List<Parameter> onExtensionNegotiation(ExtensionContext context, List<Parameter> requestedParameters);
    void onHandshakeResponse(ExtensionContext context, List<Parameter> responseParameters);

    void destroy(ExtensionContext context);

    interface ExtensionContext {

        Map<String, Object> getProperties();
    }
}

ExtendedExtension is capable of processing frames and influencing parameter values during the handshake. An Extension is used on both the client and server side, and since the negotiation is the only place where this distinction applies, we needed to somehow differentiate these sides. On the server side, only the onExtensionNegotiation(..) method is invoked, and on the client side onHandshakeResponse(..). The server-side method is a must; the client side could be somehow solved by implementing ClientEndpointConfig.Configurator#afterResponse(..) or calling Session.getNegotiatedExtensions(), but it would not be easy to get this information back to the extension instance, and even if it were, it would not be very elegant. Also, you might suggest replacing the processIncoming and processOutgoing methods with just one process(Frame) method. That is also possible, but then you might have to infer the current direction from the frame instance or somehow from the ExtensionContext, which is generally not a bad idea, but it resulted in slightly less readable code.

The ExtensionContext and the related lifecycle methods exist because the original javax.websocket.Extension is a singleton and ExtendedExtension must obey this fact. But that does not meet some of the requirements stated previously, like per-connection parameter negotiation, and of course the processing itself will most likely have some per-connection state. The lifecycle of ExtensionContext is defined as follows: the ExtensionContext instance is created right before onExtensionNegotiation (server side) or onHandshakeResponse (client side) and destroyed after the destroy method invocation. Obviously, processIncoming or processOutgoing cannot be called before the ExtensionContext is created or after it is destroyed. You can think of the handshake-related methods as @OnOpen and destroy as @OnClose.

For those more familiar with the WebSocket protocol: process*(ExtensionContext, Frame) is always invoked with an unmasked frame, so you don’t need to care about masking. On the other hand, the payload is exactly as it was received from the wire, before any validation (the UTF-8 check for text messages). This fact is particularly important when you are modifying text message content: you need to make sure it is properly encoded in relation to other messages, because the encoding/decoding process is stateful – the remainder after UTF-8 decoding is used as input to the decoding process for the next message. If you just want to test this feature and save yourself some headaches, don’t modify text message content, or try binary messages instead.
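The statefulness of UTF-8 decoding across frame boundaries can be demonstrated with the JDK's CharsetDecoder alone (a stdlib-only sketch, no Tyrus involved): a multi-byte character split across two chunks cannot be decoded until the remainder is carried over to the next chunk:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

public class Utf8SplitDemo {
    public static void main(String[] args) {
        // 'é' (U+00E9) is two bytes in UTF-8: 0xC3 0xA9.
        // Imagine a frame boundary falling between them.
        byte[] encoded = {(byte) 0xC3, (byte) 0xA9};
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder();
        CharBuffer out = CharBuffer.allocate(4);

        // first "frame" carries only the leading byte: the decoder cannot
        // consume it yet, so it stays in the buffer as a remainder
        ByteBuffer firstFrame = ByteBuffer.wrap(encoded, 0, 1);
        decoder.decode(firstFrame, out, false);
        System.out.println(firstFrame.remaining()); // prints "1" - the partial byte is left over

        // a streaming decoder must prepend that remainder to the next chunk
        ByteBuffer secondFrame = ByteBuffer.wrap(encoded); // remainder + continuation byte
        decoder.decode(secondFrame, out, true);
        decoder.flush(out);
        out.flip();
        System.out.println(out.toString()); // prints "é"
    }
}
```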

8.6.1. ExtendedExtension sample

Let’s say we want to create an extension which will encrypt and decrypt the first byte of every binary message. Assume we have a key (one byte) and our symmetrical cipher will be XOR. (Just for simplicity: (a XOR key XOR key) = a, so the encrypt() and decrypt() functions are the same.)
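Before looking at the extension itself, the XOR involution it relies on can be verified in isolation (plain Java, no Tyrus dependency; lameCrypt here is just an illustrative standalone function):

```java
public class XorInvolutionDemo {

    // The "lame crypto" idea: XOR a byte with a one-byte key. Because
    // (a ^ key) ^ key == a, the same function both encrypts and decrypts.
    static byte lameCrypt(byte value, byte key) {
        return (byte) (value ^ key);
    }

    public static void main(String[] args) {
        byte key = (byte) 0x55;  // same key the extension stores in its ExtensionContext
        byte original = (byte) 0x2A;
        byte encrypted = lameCrypt(original, key);
        byte decrypted = lameCrypt(encrypted, key); // decrypt with the same function
        System.out.println(original == decrypted);  // prints "true"
    }
}
```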

public class CryptoExtension implements ExtendedExtension {

    @Override
    public Frame processIncoming(ExtensionContext context, Frame frame) {
        return lameCrypt(context, frame);
    }

    @Override
    public Frame processOutgoing(ExtensionContext context, Frame frame) {
        return lameCrypt(context, frame);
    }

    private Frame lameCrypt(ExtensionContext context, Frame frame) {
        if (!frame.isControlFrame() && (frame.getOpcode() == 0x02)) {
            final byte[] payloadData = frame.getPayloadData();
            payloadData[0] ^= (Byte) (context.getProperties().get("key"));

            return Frame.builder(frame).payloadData(payloadData).build();
        } else {
            return frame;
        }
    }

    @Override
    public List<Parameter> onExtensionNegotiation(ExtensionContext context,
                                                  List<Parameter> requestedParameters) {
        init(context);
        // no params.
        return null;
    }

    @Override
    public void onHandshakeResponse(ExtensionContext context,
                                    List<Parameter> responseParameters) {
        init(context);
    }

    private void init(ExtensionContext context) {
        context.getProperties().put("key", (byte) 0x55);
    }

    @Override
    public void destroy(ExtensionContext context) {
        // nothing to clean up.
    }

    @Override
    public String getName() {
        return "lame-crypto-extension";
    }

    @Override
    public List<Parameter> getParameters() {
        // no params.
        return null;
    }
}
You can see that ExtendedExtension is slightly more complicated than the original Extension, so the implementation also cannot be quite as straightforward - on the other hand, it actually does something. The sample code above shows the possible simplification mentioned earlier (one process method would be enough), but please take this as just a sample implementation; real-world cases are usually more complicated.

Now that we have our CryptoExtension implemented, we want to use it. There is nothing new here compared to the standard WebSocket Java API, so feel free to skip this part if you are already familiar with it. Only the programmatic version will be demonstrated. It is possible to do the same for the annotated version as well, but it is a little more complicated on the server side and I want to keep the code as compact as possible.

Client registration:

ArrayList<Extension> extensions = new ArrayList<Extension>();
extensions.add(new CryptoExtension());

final ClientEndpointConfig clientConfiguration = ClientEndpointConfig.Builder.create()
    .extensions(extensions).build();

WebSocketContainer client = ContainerProvider.getWebSocketContainer();
final Session session = client.connectToServer(new Endpoint() {
    @Override
    public void onOpen(Session session, EndpointConfig config) {
        // ...
    }
}, clientConfiguration, URI.create(/* ... */));

Server registration:

public class CryptoExtensionApplicationConfig implements ServerApplicationConfig {

    @Override
    public Set<ServerEndpointConfig> getEndpointConfigs(Set<Class<? extends Endpoint>> endpointClasses) {
        Set<ServerEndpointConfig> endpointConfigs = new HashSet<ServerEndpointConfig>();
        endpointConfigs.add(
            ServerEndpointConfig.Builder.create(EchoEndpoint.class, "/echo")
                .extensions(Arrays.<Extension>asList(new CryptoExtension())).build()
        );
        return endpointConfigs;
    }

    @Override
    public Set<Class<?>> getAnnotatedEndpointClasses(Set<Class<?>> scanned) {
        // all scanned endpoints will be used.
        return scanned;
    }
}

public class EchoEndpoint extends Endpoint {
    @Override
    public void onOpen(Session session, EndpointConfig config) {
        // ...
    }
}
CryptoExtensionApplicationConfig will be found by the servlet scanning mechanism and automatically used for application configuration; there is no need to add anything to (or even have) web.xml.

8.6.2. Per Message Deflate Extension

The original goal of the whole extension support was to implement the permessage compression extension as defined in draft-ietf-hybi-permessage-compression-15, and we were able to achieve that goal. Well, not completely: the current implementation ignores the extension parameters. But that does not seem to matter much; it was tested with Chrome and it works fine. It also passes the newest version of the Autobahn test suite, which includes tests for this extension.

Tyrus ships two implementations: one compatible with draft-ietf-hybi-permessage-compression-15 and the Autobahn test suite, and one compatible with Chrome and Firefox (the same as the previous, just registered under a different extension name).

8.7. Client reconnect

If you need a semi-persistent client connection, you can always implement some reconnect logic yourself, but Tyrus Client offers a useful feature which should be much easier to use. See the following short sample:

ClientManager client = ClientManager.createClient();
ClientManager.ReconnectHandler reconnectHandler = new ClientManager.ReconnectHandler() {

  private int counter = 0;

  @Override
  public boolean onDisconnect(CloseReason closeReason) {
    counter++;
    if (counter <= 3) {
      System.out.println("### Reconnecting... (reconnect count: " + counter + ")");
      return true;
    } else {
      return false;
    }
  }

  @Override
  public boolean onConnectFailure(Exception exception) {
    counter++;
    if (counter <= 3) {
      System.out.println("### Reconnecting... (reconnect count: " + counter + ") " + exception.getMessage());

      // Thread.sleep(...) or some other "sleep-like" expression can be put here - you might want
      // to do that to avoid a potential DoS when you don't limit the number of reconnects.
      return true;
    } else {
      return false;
    }
  }

  @Override
  public long getDelay() {
    return 1;
  }
};

client.getProperties().put(ClientProperties.RECONNECT_HANDLER, reconnectHandler);


ReconnectHandler contains three methods: onDisconnect, onConnectFailure and getDelay. The first is executed whenever an @OnClose annotated method (or Endpoint.onClose(..)) is executed on the client side - this happens when an established connection is lost for any reason; you can find the reason in the method's parameter. The second, onConnectFailure, is invoked when the client fails to connect to the remote endpoint, for example due to a temporary network issue or current high server load. getDelay is called after either of the previous methods returns true, and the returned value is used as the delay in seconds before the next connection attempt. The default value is 5 seconds.
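The capped-attempt pattern such a handler typically implements can be exercised in isolation (a stdlib-only sketch; shouldReconnect is a hypothetical stand-in for onDisconnect/onConnectFailure, not Tyrus API). Note that the handler must increment its counter on every attempt, otherwise the cap never takes effect:

```java
public class ReconnectPolicyDemo {

    private int counter = 0;

    // Mirrors the handler's decision: allow a limited number of reconnect attempts.
    boolean shouldReconnect(int maxAttempts) {
        counter++; // count this attempt
        return counter <= maxAttempts;
    }

    public static void main(String[] args) {
        ReconnectPolicyDemo policy = new ReconnectPolicyDemo();
        int attempts = 0;
        while (policy.shouldReconnect(3)) {
            attempts++; // a real handler would wait getDelay() seconds, then reconnect
        }
        System.out.println(attempts); // prints "3"
    }
}
```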

8.8. Client behind proxy

The Tyrus client supports traversing proxies; however, this is a Tyrus-specific feature, and its configuration is shown in the following code sample:

ClientManager client = ClientManager.createClient();
client.getProperties().put(ClientProperties.PROXY_URI, "http://my.proxy.com:80");

The value is expected to be the proxy URI. The protocol part is currently ignored, but must be present.

8.9. JDK 7 client

As mentioned in previous chapters, both the Tyrus client and server were originally implemented on top of the Grizzly NIO framework. This remains true, but an alternative Tyrus WebSocket client implementation based on the Java 7 Asynchronous Channel API has been available since version 1.6. There are two ways to switch between the client implementations. If you do not mind using a Tyrus-specific API, the most straightforward way is to use:

final ClientManager client = ClientManager.createClient(JdkClientContainer.class.getName());

You just have to make sure that the dependency on the JDK client is included in your project, for example as a Maven dependency (the version should match the rest of your Tyrus dependencies):

<dependency>
    <groupId>org.glassfish.tyrus</groupId>
    <artifactId>tyrus-container-jdk-client</artifactId>
    <version><!-- your Tyrus version --></version>
</dependency>
Grizzly client is the default option, so creating a client without any parameters will result in Grizzly client being used.

There is also a way to use the JDK client with the standard WebSocket API.

final WebSocketContainer client = ContainerProvider.getWebSocketContainer();

The code listed above will scan the class path for WebSocket client implementations. A slight problem with this approach is that if there is more than one client on the classpath, the first one discovered will be used. Therefore, if you intend to use the JDK client with the standard API, you have to make sure that there is no Grizzly client on the classpath, as it might be used instead.

The main reason the JDK client has been implemented is that it does not have any extra dependencies except JDK 7 (and, of course, some other Tyrus modules), which makes it considerably more lightweight than the Tyrus Grizzly client, which requires 1.4 MB of dependencies.

It is also important to note that the JDK client has been implemented in a way similar to Grizzly client shared container option, which means that there is one thread pool shared among all clients.

Proxy configuration for JDK client is the same as for Grizzly client shown above.

8.9.1. SSL configuration

As with the Grizzly client, accessing "wss" URLs will cause the Tyrus client to pick up whatever keystore and truststore is set for the current JVM instance. However, specifying SSL parameters to be used with a JDK client instance is a little different from the Grizzly client: the Grizzly client supports both SSLEngineConfigurator and SSLContextConfigurator from the Grizzly project and SslEngineConfigurator and SslContextConfigurator from the Tyrus project, whereas the JDK client supports only the Tyrus versions of these classes. The following code sample shows an example of SSL parameter configuration for the JDK client:

SslContextConfigurator sslContextConfigurator = new SslContextConfigurator();
SslEngineConfigurator sslEngineConfigurator = new SslEngineConfigurator(sslContextConfigurator, true, false, false);

client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, sslEngineConfigurator);

8.10. Tracing support

Apart from logging, Tyrus supports another useful means of debugging and diagnosing a deployed application, referred to here as tracing. Tracing consists of including vital information about handshake handling in the handshake response. The provided information includes, among other things, an insight into how Tyrus matches the handshake request URI against the URIs of the deployed endpoints and how the best matching endpoint is selected. The tracing information is included in the handshake response as the content of HTTP headers prefixed with X-Tyrus-Tracing-. All the tracing information is also available in the server log if the appropriate logging level is set. If it is still unclear how Tyrus tracing works, please refer to the subsection titled Tracing Examples.

8.10.1. Configuration

Tracing support is disabled by default. You can enable it either "globally" for all application handshake requests or selectively per handshake request. The tracing support activation is controlled by setting the org.glassfish.tyrus.server.tracingType configuration property. The property value is expected to be one of the following:

  • OFF - tracing support is disabled (default value).

  • ON_DEMAND - tracing support is in a stand-by mode; it is enabled selectively per handshake, via a special X-Tyrus-Tracing-Accept HTTP header in a handshake request.

  • ALL - tracing support is enabled for all handshake requests.

The level of detail of the information provided by Tyrus tracing facility - the tracing threshold - can be customized. The tracing threshold can be set at the application level via org.glassfish.tyrus.server.tracingThreshold application configuration property in both Glassfish and Grizzly as will be shown in the following samples, or at a request level, via X-Tyrus-Tracing-Threshold HTTP header in a handshake request. The request-level configuration overrides any application level setting. There are 2 supported levels of detail for Tyrus tracing:

  • SUMMARY - very basic summary information about handshake processing

  • TRACE - detailed information about handshake processing (default threshold value).

Global configuration examples

As has been already said, tracing is disabled by default. The following code sample shows, how ON_DEMAND tracing with level set to SUMMARY can be enabled on Grizzly server:

serverProperties.put(TyrusWebSocketEngine.TRACING_TYPE, "ON_DEMAND");
serverProperties.put(TyrusWebSocketEngine.TRACING_THRESHOLD, "SUMMARY");

Similarly, ALL tracing with level set to TRACE (the default) can be enabled on a Glassfish server in web.xml:

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <context-param>
        <param-name>org.glassfish.tyrus.server.tracingType</param-name>
        <param-value>ALL</param-value>
    </context-param>
    <context-param>
        <param-name>org.glassfish.tyrus.server.tracingThreshold</param-name>
        <param-value>TRACE</param-value>
    </context-param>
</web-app>


It has also already been mentioned that the tracing threshold configured at the application level can be overridden per handshake request, as will be shown in the following section.

Configuring tracing via handshake request headers

Whenever the tracing support is active (ON_DEMAND or ALL) you can customize the tracing behaviour by including one or more of the following request HTTP headers in the individual handshake requests:

  • X-Tyrus-Tracing-Accept - used to enable the tracing support for the particular request. It is applied only when the application-level tracing support is configured to ON_DEMAND mode. The value of the header is not used by the Tyrus tracing facility and as such it can be any arbitrary (even empty) string.

  • X-Tyrus-Tracing-Threshold - used to override the tracing threshold. Allowed values are: SUMMARY, TRACE.

8.10.2. Tracing Examples

An example of a handshake request to a server in ON_DEMAND tracing mode requesting SUMMARY tracing information:

GET /endpoint/b HTTP/1.1
Connection: Upgrade
Host: localhost:8025
Origin: localhost:8025
Sec-WebSocket-Key: YrFldD8nhRW+6hJ2K/TMqw==
Sec-WebSocket-Version: 13
Upgrade: websocket
X-Tyrus-Tracing-Accept: Whatever
X-Tyrus-Tracing-Threshold: SUMMARY

An example of a possible response to the request above:

HTTP/1.1 404 Not Found
x-tyrus-tracing-00: [0 ms] Matching request URI /samples-debug/endpoint/b against /samples-debug/endpoint/{a}/b
x-tyrus-tracing-01: [0 ms] URIs /samples-debug/endpoint/b and /samples-debug/endpoint/{a}/b have different length
x-tyrus-tracing-02: [0 ms] Matching request URI /samples-debug/endpoint/b against /samples-debug/endpoint/{a}/{b}
x-tyrus-tracing-03: [0 ms] URIs /samples-debug/endpoint/b and /samples-debug/endpoint/{a}/{b} have different length
x-tyrus-tracing-04: [0 ms] Matching request URI /samples-debug/endpoint/b against /samples-debug/endpoint/a/b
x-tyrus-tracing-05: [1 ms] URIs /samples-debug/endpoint/b and /samples-debug/endpoint/a/b have different length
x-tyrus-tracing-06: [1 ms] Matching request URI /samples-debug/endpoint/b against /samples-debug/endpoint/a/a
x-tyrus-tracing-07: [1 ms] URIs /samples-debug/endpoint/b and /samples-debug/endpoint/a/a have different length
x-tyrus-tracing-08: [1 ms] Matching request URI /samples-debug/endpoint/b against /samples-debug/endpoint/a
x-tyrus-tracing-09: [1 ms] Segment "a" does not match
x-tyrus-tracing-10: [1 ms] Matching request URI /samples-debug/endpoint/b against /samples-debug/endpoint/a/{b}
x-tyrus-tracing-11: [1 ms] URIs /samples-debug/endpoint/b and /samples-debug/endpoint/a/{b} have different length
x-tyrus-tracing-12: [3 ms] Endpoints matched to the request URI: []

The time in square brackets in the sample above is the time elapsed since the handshake request was received.

An example of a possible handshake response from a server in ALL tracing mode with tracing threshold set to TRACE:

HTTP/1.1 101
connection: Upgrade
sec-websocket-accept: C8/QbF4Mx9sX31sihUcnI19yqto=
upgrade: websocket
x-tyrus-tracing-00: [0 ms] Matching request URI /samples-debug/endpoint/a/b against /samples-debug/endpoint/{a}/b
x-tyrus-tracing-01: [0 ms] Matching request URI /samples-debug/endpoint/a/b against /samples-debug/endpoint/{a}/{b}
x-tyrus-tracing-02: [0 ms] Matching request URI /samples-debug/endpoint/a/b against /samples-debug/endpoint/a/b
x-tyrus-tracing-03: [1 ms] Matching request URI /samples-debug/endpoint/a/b against /samples-debug/endpoint/a/a
x-tyrus-tracing-04: [1 ms] Segment "a" does not match
x-tyrus-tracing-05: [1 ms] Matching request URI /samples-debug/endpoint/a/b against /samples-debug/endpoint/a
x-tyrus-tracing-06: [1 ms] URIs /samples-debug/endpoint/a/b and /samples-debug/endpoint/a have different length
x-tyrus-tracing-07: [1 ms] Matching request URI /samples-debug/endpoint/a/b against /samples-debug/endpoint/a/{b}
x-tyrus-tracing-08: [3 ms] Choosing better match from /samples-debug/endpoint/{a}/b and /samples-debug/endpoint/a/b
x-tyrus-tracing-09: [3 ms] /samples-debug/endpoint/a/b is an exact match
x-tyrus-tracing-10: [3 ms] Choosing better match from /samples-debug/endpoint/a/{b} and /samples-debug/endpoint/{a}/b
x-tyrus-tracing-11: [3 ms] /samples-debug/endpoint/a/{b} is a better match, because it has longer exact path
x-tyrus-tracing-12: [3 ms] Choosing better match from /samples-debug/endpoint/a/{b} and /samples-debug/endpoint/{a}/b
x-tyrus-tracing-13: [3 ms] /samples-debug/endpoint/a/{b} is a better match, because it has longer exact path
x-tyrus-tracing-14: [3 ms] Choosing better match from /samples-debug/endpoint/a/{b} and /samples-debug/endpoint/a/b
x-tyrus-tracing-15: [3 ms] /samples-debug/endpoint/a/b is an exact match
x-tyrus-tracing-16: [3 ms] Choosing better match from /samples-debug/endpoint/{a}/{b} and /samples-debug/endpoint/a/{b}
x-tyrus-tracing-17: [4 ms] /samples-debug/endpoint/a/{b} is a better match, because it has longer exact path
x-tyrus-tracing-18: [4 ms] Choosing better match from /samples-debug/endpoint/{a}/{b} and /samples-debug/endpoint/{a}/b
x-tyrus-tracing-19: [4 ms] /samples-debug/endpoint/{a}/b is a better match, because /samples-debug/endpoint/{a}/{b} has more variables
x-tyrus-tracing-20: [4 ms] Endpoints matched to the request URI: [/samples-debug/endpoint/a/b, /samples-debug/endpoint/a/{b}, /samples-debug/endpoint/{a}/b, /samples-debug/endpoint/{a}/{b}]
x-tyrus-tracing-21: [4 ms] Endpoint selected as a match to the handshake URI: /samples-debug/endpoint/a/b
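The preference rules visible in the trace above (an exact match wins; otherwise the template with the longer exact path is preferred, and with equal exact paths the one with fewer variables wins) can be sketched in plain Java. This is an illustration of the rules, not Tyrus source:

```java
import java.util.Arrays;

// Illustrative sketch (not Tyrus source) of the match-preference rules shown in
// the trace above: an exact match wins; otherwise the template whose exact
// (non-variable) segments extend further from the left is preferred; with equal
// exact prefixes, the template with fewer variables wins.
public class MatchPreference {

    static boolean isVariable(String segment) {
        return segment.startsWith("{") && segment.endsWith("}");
    }

    /** Length of the leading run of exact (non-variable) segments. */
    static int exactPrefixLength(String template) {
        int count = 0;
        for (String segment : template.split("/")) {
            if (segment.isEmpty()) continue;
            if (isVariable(segment)) break;
            count++;
        }
        return count;
    }

    static int variableCount(String template) {
        return (int) Arrays.stream(template.split("/")).filter(MatchPreference::isVariable).count();
    }

    /** Returns the preferred of two templates that both match the request path. */
    public static String better(String a, String b, String requestPath) {
        if (a.equals(requestPath)) return a;                   // exact match wins
        if (b.equals(requestPath)) return b;
        int pa = exactPrefixLength(a), pb = exactPrefixLength(b);
        if (pa != pb) return pa > pb ? a : b;                  // longer exact path wins
        return variableCount(a) <= variableCount(b) ? a : b;   // fewer variables win
    }
}
```

Applied to the trace above: /endpoint/a/b beats /endpoint/a/{b} (exact match), /endpoint/a/{b} beats /endpoint/{a}/b (longer exact path), and /endpoint/{a}/b beats /endpoint/{a}/{b} (fewer variables).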

8.11. Client handshake request and response logging

The Tyrus client offers a way to enable printing of handshake requests and responses to the standard output without having to configure Java logging, which is essential when debugging a misbehaving WebSocket application. This feature is particularly useful with tracing enabled. The following sample shows how handshake logging can be enabled:

ClientManager client = ClientManager.createClient();
client.getProperties().put(ClientProperties.LOG_HTTP_UPGRADE, true);

8.12. JMX Monitoring

Tyrus allows monitoring and accessing some runtime properties and metrics on the server side using JMX (Java Management Extensions). The monitoring API has been available since version 1.6, and the following properties are available at runtime through MXBeans. For each application: the number of open sessions, the maximal number of open sessions since the start of monitoring, and the list of deployed endpoint class names and paths. For each endpoint: the endpoint class name, the path the endpoint is registered on, the number of open sessions and the maximal number of open sessions. Apart from that, message and error statistics are collected both per application and per individual endpoint.

The following message statistics are monitored for both sent and received messages:

  • messages count

  • messages count per second

  • average message size

  • smallest message size

  • largest message size

Moreover, all of these are collected separately for text, binary and control messages, and in addition to the three separate categories, totals summing up the statistics from the three message types are also available.

As mentioned above, Tyrus also monitors errors at both the application and endpoint level. An error is identified by the class name of the Throwable that has been thrown. Tyrus counts how many times each Throwable has been thrown, so a list of errors together with the number of occurrences of each is available at both the application and endpoint level. The monitored errors correspond to the invocation of an @OnError method on an annotated endpoint or its equivalent on a programmatic endpoint (the invocation of @OnError is just an analogy; an error is monitored even if no @OnError method is provided on the endpoint). Errors that occur in @OnOpen and @OnClose methods and in methods handling incoming messages are monitored. Errors that occur during the handshake are not among the monitored errors.

The collected metrics, as well as the endpoint properties mentioned above, are accessible at runtime through Tyrus MXBeans. As already mentioned, the information is available at both the application and endpoint level, with each application or endpoint represented by four MXBeans. One of these contains total message statistics for both sent and received messages, as well as any properties specific to the application or endpoint, such as the endpoint path. The other three MXBeans contain statistics about sent and received text, binary and control messages.

When a user connects to the MBean server of a JVM running a Tyrus application using a JMX client such as JConsole, they will see the following structure:

  • Application 1 - MXBean containing a list of deployed endpoint class names and paths, number of open sessions, maximal number of open sessions, error and total message statistics for the application.

    • message statistics - a directory containing message statistics MXBeans

      • text - MXBean containing text message statistics

      • binary - MXBean containing binary message statistics

      • control - MXBean containing control message statistics

    • endpoints - a directory containing application endpoint MXBeans

      • Endpoint 1 - MXBean containing Endpoint 1 class name and path, number of open sessions, maximal number of open sessions, error and total message statistics for the endpoint.

        • text - MXBean containing text message statistics

        • binary - MXBean containing binary message statistics

        • control - MXBean containing control message statistics

      • Endpoint 2

  • Application 2

The monitoring structure described above is slightly simplified: an additional monitoring level is available that makes message metrics also available per session. The structure is very similar to the one described above, except that four more MXBeans are registered for each session, containing text, binary, control and total message statistics. To distinguish the two monitoring levels, they are referred to as endpoint-level monitoring and session-level monitoring.
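The registered MXBeans can also be discovered programmatically via the platform MBean server. A minimal sketch, assuming the beans are registered under an org.glassfish.tyrus JMX domain (the exact ObjectNames are best checked with JConsole):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;

import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Sketch: querying the platform MBean server for Tyrus monitoring MXBeans.
// The "org.glassfish.tyrus" domain in the pattern is an assumption; verify the
// actual ObjectNames with a JMX client such as JConsole.
public class TyrusMXBeanQuery {

    public static Set<ObjectName> tyrusBeans() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // wildcard query over all MXBeans in the assumed Tyrus domain
            return server.queryNames(new ObjectName("org.glassfish.tyrus:*"), null);
        } catch (MalformedObjectNameException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

On a JVM without a deployed Tyrus application the query simply returns an empty set.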

8.12.1. Configuration

As already mentioned, monitoring is supported only on the server side and is disabled by default. The following code sample shows how endpoint-level monitoring can be enabled on a Grizzly server:

serverProperties.put(ApplicationEventListener.APPLICATION_EVENT_LISTENER, new SessionlessApplicationMonitor());

Similarly, session-level monitoring can be enabled on a Grizzly server in the following way:

serverProperties.put(ApplicationEventListener.APPLICATION_EVENT_LISTENER, new SessionAwareApplicationMonitor());

Monitoring can be configured on Glassfish in web.xml; the following code sample shows the endpoint-level configuration:

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <context-param>
        <param-name>org.glassfish.tyrus.core.monitoring.ApplicationEventListener</param-name>
        <param-value>org.glassfish.tyrus.ext.monitoring.jmx.SessionlessApplicationMonitor</param-value>
    </context-param>
</web-app>


Similarly session-level monitoring can be configured on Glassfish in web.xml in the following way:

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <context-param>
        <param-name>org.glassfish.tyrus.core.monitoring.ApplicationEventListener</param-name>
        <param-value>org.glassfish.tyrus.ext.monitoring.jmx.SessionAwareApplicationMonitor</param-value>
    </context-param>
</web-app>


8.13. Maximal number of open sessions on server-side

Tyrus offers a few ways to limit the number of open sessions, which can be used to conserve limited resources on the system hosting the server. The limits can be configured in several scopes:

  • per whole application
  • per endpoint
  • per remote address (client IP address)

If the number of simultaneously opened sessions exceeds any of these limits, Tyrus will close the session with close code 1013 - Try Again Later.

The limits mentioned above can be combined. For example, suppose an application with two endpoints has an overall limit of 1000 open sessions, and the first, non-critical endpoint is limited to at most 75 open sessions. The second endpoint can then handle between 925 and 1000 open sessions, depending on how many open sessions are connected to the first endpoint (0-75).
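The way the limits combine can be sketched as a simple admission check: a new session is accepted only if no configured limit has been reached, otherwise it is closed with close code 1013. This is an illustration of the rule, not Tyrus source:

```java
// Illustrative sketch of how the three limits combine. The counts and limits
// are plain ints standing in for Tyrus internals; a real implementation tracks
// them per application, per endpoint and per remote address.
public class SessionAdmission {

    public static final int TRY_AGAIN_LATER = 1013;

    /** Returns 0 if the session may be opened, or the close code to use. */
    public static int check(int appOpen, int appLimit,
                            int endpointOpen, int endpointLimit,
                            int addrOpen, int addrLimit) {
        if (appOpen >= appLimit || endpointOpen >= endpointLimit || addrOpen >= addrLimit) {
            return TRY_AGAIN_LATER;   // any exhausted limit closes the new session
        }
        return 0;
    }
}
```

With the example numbers above, a 76th session on the first endpoint is refused even though the application-wide limit of 1000 is far from reached.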

8.13.1. Maximal number of open sessions per application

This configuration property limits the overall number of open sessions per application. Its main purpose is to restrict how many resources the application can consume.

The number of open sessions per application can be configured by setting the property org.glassfish.tyrus.maxSessionsPerApp. The property can be set as a <context-param> in web.xml or as an entry in the properties map of a (standalone) Server instance.

Note that only a positive integer is allowed.

This example sets the maximal number of open sessions per application to 500:

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <context-param>
        <param-name>org.glassfish.tyrus.maxSessionsPerApp</param-name>
        <param-value>500</param-value>
    </context-param>
</web-app>

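The same property can also be set programmatically on the standalone Grizzly server. A sketch, assuming the org.glassfish.tyrus.server.Server constructor variant that takes a properties map, with MyEndpoint as a placeholder for your annotated endpoint class:

```java
import java.util.HashMap;
import java.util.Map;

import org.glassfish.tyrus.server.Server;

// Sketch: passing the session limit in the standalone Server properties map.
// MyEndpoint is a hypothetical annotated endpoint class.
Map<String, Object> properties = new HashMap<>();
properties.put("org.glassfish.tyrus.maxSessionsPerApp", 500);

Server server = new Server("localhost", 8025, "/websockets", properties, MyEndpoint.class);
server.start();
```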

8.13.2. Maximal number of open sessions per remote address

The number of open sessions per remote address can be configured by setting the property org.glassfish.tyrus.maxSessionsPerRemoteAddr. The property can be set as a <context-param> in web.xml or as an entry in the properties map of a (standalone) Server instance.

The remote address value is obtained from ServletRequest#getRemoteAddr() or its alternative when using the Grizzly server implementation. Beware that this method always returns the last node that sent the HTTP request, so all clients behind one proxy will be treated as clients from a single remote address.

Note that only a positive integer is allowed.

This example sets the maximal number of open sessions from a unique IP address (or last proxy) to 5:

<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <context-param>
        <param-name>org.glassfish.tyrus.maxSessionsPerRemoteAddr</param-name>
        <param-value>5</param-value>
    </context-param>
</web-app>


8.13.3. Maximal number of open sessions per endpoint

Setting the maximal number of sessions on an annotated endpoint:

import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

import org.glassfish.tyrus.core.MaxSessions;

/**
 * Annotated endpoint.
 */
@MaxSessions(100)
@ServerEndpoint(value = "/limited-sessions-endpoint")
public static class LimitedSessionsEndpoint {

    @OnOpen
    public void onOpen(Session s) {
        // ...
    }

    // ...
}
Setting the maximal number of sessions for a programmatic endpoint:


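The programmatic listing is missing here; a sketch, assuming the TyrusServerEndpointConfig builder with its maxSessions setter and a ProgrammaticEndpoint class (hypothetical name):

```java
import org.glassfish.tyrus.core.TyrusServerEndpointConfig;

// Sketch: configuring the session limit for a programmatic endpoint via the
// Tyrus-specific endpoint config builder. ProgrammaticEndpoint is a placeholder
// for your javax.websocket.Endpoint subclass.
TyrusServerEndpointConfig config =
        TyrusServerEndpointConfig.Builder
                .create(ProgrammaticEndpoint.class, "/limited-sessions-endpoint")
                .maxSessions(100)
                .build();
```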
Note that only a positive integer is allowed.

8.14. Client HTTP Authentication

For server endpoints protected by HTTP authentication, Tyrus provides a mechanism to authenticate the client. When the client receives an HTTP response with status code 401 - Unauthorized, Tyrus extracts the required scheme from the WWW-Authenticate challenge. It then chooses an authenticator from the map of registered authenticators and uses the configured credentials. If no suitable authenticator is found or the credentials are missing, an AuthenticationException is thrown before the handshake can complete. Tyrus ships with implementations of the two most common authentication schemes, BASIC and DIGEST, but it is also possible to implement your own authenticator and register it via the configuration builder org.glassfish.tyrus.client.auth.AuthConfig.Builder, or even to override the default BASIC or DIGEST implementations. If no org.glassfish.tyrus.client.auth.AuthConfig client property is set, the default configuration is used, in which the internal BASIC and DIGEST implementations are enabled.
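The scheme-selection step just described can be sketched in plain Java (an illustration, not Tyrus source; authenticators are simplified to strings keyed by upper-cased scheme name):

```java
import java.util.Locale;
import java.util.Map;

// Sketch of the authenticator lookup described above: the scheme token is the
// first word of the WWW-Authenticate challenge, and it selects an entry from
// the map of registered authenticators (here simplified to scheme -> name).
public class SchemeSelection {

    public static String selectScheme(String wwwAuthenticateHeader) {
        // e.g. "Basic realm=\"example\"" -> "BASIC"
        String scheme = wwwAuthenticateHeader.trim().split("\\s+")[0];
        return scheme.toUpperCase(Locale.ROOT);
    }

    public static String selectAuthenticator(Map<String, String> authenticators, String challenge) {
        String authenticator = authenticators.get(selectScheme(challenge));
        if (authenticator == null) {
            // Tyrus throws AuthenticationException in this situation
            throw new IllegalStateException("No authenticator registered for " + selectScheme(challenge));
        }
        return authenticator;
    }
}
```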

Please note that the Basic authentication scheme should be used over an HTTPS connection only.

8.14.1. Credentials

Credentials are required by both authentication schemes implemented in Tyrus. You can pass an instance to ClientManager as a property:

client.getProperties().put(ClientProperties.CREDENTIALS,
        new Credentials("ws_user", "password".getBytes(AuthConfig.CHARACTER_SET)));
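For illustration, the token the default BASIC authenticator sends from these credentials can be sketched in plain Java: per RFC 7617 the Authorization header value is "Basic " followed by base64(username ":" password). This is a sketch, not the Tyrus implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch (not Tyrus source) of the Authorization header value produced for
// BASIC authentication, per RFC 7617: "Basic " + base64(username ":" password).
public class BasicAuthHeader {

    public static String generate(String username, byte[] password) {
        byte[] userBytes = (username + ":").getBytes(StandardCharsets.ISO_8859_1);
        byte[] token = new byte[userBytes.length + password.length];
        System.arraycopy(userBytes, 0, token, 0, userBytes.length);
        System.arraycopy(password, 0, token, userBytes.length, password.length);
        return "Basic " + Base64.getEncoder().encodeToString(token);
    }
}
```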

8.14.2. Auth Configuration

org.glassfish.tyrus.client.auth.AuthConfig provides a way to configure HTTP authentication schemes. Creating an instance of org.glassfish.tyrus.client.auth.AuthConfig is optional; if you don't specify an AuthConfig, a default instance is created, equivalent to the following code listing:

  AuthConfig authConfig = AuthConfig.Builder.create().build();
  ClientManager client = ClientManager.createClient();
  client.getProperties().put(ClientProperties.AUTH_CONFIG, authConfig);

If authentication is required after the initial upgrade request, Tyrus chooses a proper authentication scheme based on the challenge received from the server. Two HTTP authentication schemes are implemented and registered by default.

8.14.3. User defined authenticator

Tyrus provides an option to implement your own client HTTP authenticator by extending org.glassfish.tyrus.client.auth.Authenticator and implementing generateAuthorizationHeader. The request URI, the WWW-Authenticate response header and the provided Credentials are passed as parameters. The method must return the response to the authentication challenge as required by the HTTP server. An instance of the implemented class must be registered with the Tyrus configuration via org.glassfish.tyrus.client.auth.AuthConfig.Builder#putAuthProvider(String scheme, Authenticator userDefinedAuthenticator), and the created AuthConfig instance must be put into the client properties.

package org.glassfish.tyrus.client.auth;

import java.net.URI;

/**
 * HTTP authentication provider.
 *
 * The class generates an authorization token used as the value of the {@code Authorization} HTTP request header.
 *
 * @author Ondrej Kosatka
 */
public abstract class Authenticator {

    /**
     * Generates the authorization token used as the value of the {@code Authorization} HTTP request header.
     *
     * @param uri requested URI; needed for generating authorization tokens for some
     *            authentication schemes (DIGEST: {@link DigestAuthenticator}).
     * @param wwwAuthenticateHeader value of the {@code WWW-Authenticate} header from the HTTP response.
     * @param credentials credentials.
     * @return generated value of the {@code Authorization} header.
     * @throws AuthenticationException when it is not possible to create the auth token.
     */
    public abstract String generateAuthorizationHeader(final URI uri, final String wwwAuthenticateHeader, final Credentials credentials) throws AuthenticationException;
}


8.14.4. Examples

The simplest way to set up Tyrus authentication is by adding the client property ClientProperties.CREDENTIALS:

  client.getProperties().put(ClientProperties.CREDENTIALS, new Credentials("ws_user", "password"));

How to configure Tyrus to suppress Basic authentication, even if the server challenges with the Basic authentication scheme:

  AuthConfig authConfig = AuthConfig.Builder.create().
  Credentials credentials = new Credentials("ws_user", "password");
  client.getProperties().put(ClientProperties.AUTH_CONFIG, authConfig);
  client.getProperties().put(ClientProperties.CREDENTIALS, credentials);

How to configure Tyrus with a user-defined DIGEST authenticator alongside the Tyrus Basic authentication. The user-defined authentication provider MyOwnDigestAuthenticator must extend org.glassfish.tyrus.client.auth.Authenticator.

  AuthConfig authConfig = AuthConfig.Builder.create().
                               putAuthProvider("Digest", new MyOwnDigestAuthenticator()).
                               build();
  Credentials credentials = new Credentials("ws_user", "password");
  client.getProperties().put(ClientProperties.AUTH_CONFIG, authConfig);
  client.getProperties().put(ClientProperties.CREDENTIALS, credentials);

How to configure Tyrus with a user-defined NTLM authenticator while suppressing Tyrus Basic authentication, even if the server challenges with the Basic authentication scheme. The user-defined authentication provider MyOwnNTLMAuthenticator must extend org.glassfish.tyrus.client.auth.Authenticator.

  AuthConfig authConfig = AuthConfig.Builder.create().
                               putAuthProvider("NTLM", new MyOwnNTLMAuthenticator()).
                               build();
  Credentials credentials = new Credentials("ws_user", "password");
  client.getProperties().put(ClientProperties.AUTH_CONFIG, authConfig);
  client.getProperties().put(ClientProperties.CREDENTIALS, credentials);

8.15. Client HTTP Redirect

Another Tyrus feature is HTTP redirect support. If the client receives a 3xx HTTP redirect response code during a handshake and HTTP redirect is allowed (by the ClientProperties.REDIRECT_ENABLED property), the client engine transparently follows the URI contained in the Location header of the received HTTP response and sends the upgrade request to the new URI. Redirects can be chained up to the limit set in ClientProperties.REDIRECT_THRESHOLD, whose default value is 5. If an HTTP redirect fails for any reason, a RedirectException is thrown.

8.15.1. Supported HTTP response codes

The following 3xx HTTP response codes are followed automatically:

  • 300 - Multiple Choices

  • 301 - Moved Permanently

  • 302 - Found

  • 303 - See Other (since HTTP/1.1)

  • 307 - Temporary Redirect (since HTTP/1.1)

  • 308 - Permanent Redirect (Experimental RFC; RFC 7238)

8.15.2. Configuration

To enable the HTTP redirect feature, ClientProperties.REDIRECT_ENABLED must be explicitly set to true (the default value is false); otherwise a RedirectException will be thrown when any of the supported HTTP redirect response codes (see above) is received.

client.getProperties().put(ClientProperties.REDIRECT_ENABLED, true);

ClientProperties.REDIRECT_THRESHOLD is a property which can be used to limit the maximal number of chained redirects. A positive integer is expected; the default value is 5.

client.getProperties().put(ClientProperties.REDIRECT_THRESHOLD, 3);

8.15.3. Exception handling

A RedirectException is set as the cause of a DeploymentException when any of the supported redirect HTTP response status codes (see above) is received and WebSocketContainer.connectToServer(...) fails for any of the following reasons:

  • ClientProperties.REDIRECT_ENABLED property is not set to true.

  • Value of ClientProperties.REDIRECT_THRESHOLD is not assignable to Integer.

  • Number of chained redirection exceeds a value of ClientProperties.REDIRECT_THRESHOLD (default value is 5).

  • Infinite redirection loop is detected.

  • Location response header is missing, is empty or does not contain a valid URI.
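The redirect-following loop, including the threshold check and loop detection listed above, can be sketched in plain Java (an illustration, not Tyrus source; a URI with no map entry stands for a response that answers the upgrade directly):

```java
import java.net.URI;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the redirect-following behaviour described above: "responses" maps
// a request URI to the Location header of a 3xx response; absent keys mean the
// URI answers the upgrade directly. The loop enforces the redirect threshold
// and detects redirect cycles.
public class RedirectFollower {

    public static URI follow(URI start, Map<URI, URI> responses, int threshold) {
        Set<URI> visited = new HashSet<>();
        URI current = start;
        int redirects = 0;
        while (responses.get(current) != null) {
            if (!visited.add(current)) {
                throw new IllegalStateException("Infinite redirection loop detected");
            }
            if (++redirects > threshold) {
                throw new IllegalStateException("Redirect threshold " + threshold + " exceeded");
            }
            current = responses.get(current);   // follow the Location header
        }
        return current;                          // URI that answers the upgrade
    }
}
```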

8.16. Client support for HTTP status 503 - Service Unavailable with Retry-After header

Tyrus offers automatic handling of HTTP status code 503 - Service Unavailable, which a server can return when it is temporarily overloaded or down for maintenance. When a Retry-After header is included in the response, the client parses its value and schedules another reconnect attempt.

This feature is disabled by default.

The implementation limits connection attempts to 5, each with a reconnect delay of no more than 300 seconds. Other values or conditions can be handled by a custom ReconnectHandler (see RetryAfterException).
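Per RFC 7231, the Retry-After value is either a number of seconds or an HTTP date; interpreting it can be sketched in plain Java (an illustration, not Tyrus source):

```java
import java.time.Duration;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Sketch of interpreting a Retry-After header value per RFC 7231: try the
// delta-seconds form first, then fall back to an RFC 1123 HTTP date, computing
// the delay relative to the supplied "now".
public class RetryAfterParser {

    public static Duration delay(String retryAfter, ZonedDateTime now) {
        try {
            return Duration.ofSeconds(Long.parseLong(retryAfter.trim()));
        } catch (NumberFormatException e) {
            ZonedDateTime date = ZonedDateTime.parse(retryAfter.trim(), DateTimeFormatter.RFC_1123_DATE_TIME);
            return Duration.between(now, date);
        }
    }
}
```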

8.16.1. Configuration

final ClientManager client = ClientManager.createClient();
client.getProperties().put(ClientProperties.RETRY_AFTER_SERVICE_UNAVAILABLE, true);