New full-duplex HTTP tunnel implementation (client and server)

https://issues.jboss.org/browse/NETTY-246?page=com.atlassian.jirafisheyeplugin:fisheye-issuepanel

——————————————————————————————————————————————————————————————————

As a contributor to the JXTA peer-to-peer networking framework, I have been tasked with replacing the HTTP-based transport that exists in JXTA. I had recently completed the work to rewrite the TCP-based transport to use Netty, and felt it would be a good idea to use Netty for HTTP too. However, I found the existing HTTP tunnel to be unsatisfactory, mostly because it requires a servlet container on the server side.

As such, I have created a new full-duplex HTTP tunnel using Netty's low-level HTTP support. It has been tested in relatively simple setups to date, and there are a number of known areas for improvement or fixing, as I would not yet consider myself an expert in Netty's inner workings.

The implementation uses two TCP connections per channel (two SocketChannels, in fact): one used for sending data from client to server (the "send" channel), the other for sending data from server to client (the "poll" channel). This technique is known as Comet, and is necessary to allow usage via a proxy (not yet implemented, but planned). There are other intricacies in the implementation, such as an enforced upper limit of 16KB per message body, which is intended to smooth out per-byte throughput rather than delivering highly inconsistent payload sizes. Whether this works well in practice is debatable, and I would encourage the wider community to discuss it and help choose an appropriate threshold.
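
To give a feel for the intended usage, here is a rough sketch of the client side. The factory and handler names (HttpTunnelClientChannelFactory, MyHandler) are illustrative rather than final API; the idea is that the tunnel factory wraps an ordinary NIO socket factory and manages the paired "send" and "poll" connections internally, so the application just sees a normal channel:

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ClientBootstrap;
    import org.jboss.netty.channel.ChannelFactory;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

    public class TunnelClientExample {
        public static void main(String[] args) {
            ChannelFactory sockets = new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());
            // Illustrative name: the tunnel factory wraps the socket factory
            // and creates the two underlying HTTP connections per channel.
            ChannelFactory tunnel = new HttpTunnelClientChannelFactory(sockets);

            ClientBootstrap bootstrap = new ClientBootstrap(tunnel);
            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() {
                    // Application handlers go here, exactly as for a plain
                    // SocketChannel pipeline (MyHandler is a placeholder).
                    return Channels.pipeline(new MyHandler());
                }
            });
            bootstrap.connect(new InetSocketAddress("example.com", 80));
        }
    }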

Issues with the implementation:

1. No proxy support. This isn't difficult, but has not been implemented yet. If no one else in the community picks it up within the next week, I will likely implement it as a patch extension.
2. Poor connection close handling. The tunnel should be resilient to requests, forceful or polite, to close a connection that is servicing the tunnel. While HTTP 1.1 connections can be long-lived, some proxies will close them frequently or at regular intervals.
3. Write fragmenting / aggregating. As mentioned above, an upper limit is imposed on the size of the message bodies sent in each direction. While the implementation correctly enforces this upper limit, it makes no attempt to aggregate smaller fragments into larger bodies. As a result, the message bodies sent may be wastefully small even when there is other data waiting in the outgoing queue. A simplified sketch of the fragmenting behaviour appears after this list.
4. Dealing with the synchronous nature of HTTP. While HTTP 1.1 should allow pipelining and full-duplex communication, most firewalls and proxies do not tolerate it, so we should adhere to strict request / response ordering. The implementation does this (the second sketch after this list shows the approach), but I am unsure whether it is a good expression of this logic in Netty.
5. Correct handling of the various event types in Netty. This is probably the part I am least comfortable with. I make an effort to honour the semantics of ChannelFutures on writes, and to deal with bind/unbind/connect/disconnect, but I am unsure whether these are handled correctly; the third sketch after this list shows what I am aiming for with ChannelFutures. In-depth review would be greatly beneficial here.
6. Unit tests use jMock. jMock is what I am used to from previous projects, but I see that EasyMock is the mocking library of choice here. The jMock tests do not appear to run well within Netty's Maven build, so they may need to be rewritten using EasyMock to pass QA.
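
To make issue 3 concrete, the fragmenting side currently behaves along these lines (a simplified sketch, not the actual patch code): an oversized write is cut into slices of at most 16KB, each sent as a separate HTTP request body, but neighbouring small writes are never merged into one body.

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.channel.ChannelFuture;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelDownstreamHandler;

    public class FragmentingSketch extends SimpleChannelDownstreamHandler {

        private static final int MAX_BODY_SIZE = 16 * 1024;

        @Override
        public void writeRequested(ChannelHandlerContext ctx, MessageEvent e)
                throws Exception {
            ChannelBuffer data = (ChannelBuffer) e.getMessage();
            if (data.readableBytes() <= MAX_BODY_SIZE) {
                // Small writes pass straight through - they are never
                // aggregated with other queued writes (issue 3).
                ctx.sendDownstream(e);
                return;
            }
            // Oversized writes are cut into <=16KB slices, each written as
            // the body of a separate HTTP request.
            while (data.readableBytes() > 0) {
                int size = Math.min(MAX_BODY_SIZE, data.readableBytes());
                ChannelBuffer slice = data.readSlice(size);
                ChannelFuture fragmentFuture = Channels.future(ctx.getChannel());
                Channels.write(ctx, fragmentFuture, slice);
            }
            // Completing the caller's original future properly is the
            // subject of issue 5 - see the third sketch below.
            e.getFuture().setSuccess();
        }
    }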
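
For issue 4, the serialising logic is roughly as follows (again a simplified sketch; in the real code this lives inside the tunnel handlers and the synchronisation is more careful). A new HTTP request is only released once the response to the previous one has arrived:

    import java.util.LinkedList;
    import java.util.Queue;

    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelHandler;
    import org.jboss.netty.handler.codec.http.HttpResponse;

    public class RequestSerialisingSketch extends SimpleChannelHandler {

        private final Queue<MessageEvent> pending =
                new LinkedList<MessageEvent>();
        private boolean requestInFlight = false;

        @Override
        public synchronized void writeRequested(ChannelHandlerContext ctx,
                MessageEvent e) throws Exception {
            if (requestInFlight) {
                // Hold the request until the outstanding response arrives,
                // preserving strict request / response ordering for proxies.
                pending.offer(e);
            } else {
                requestInFlight = true;
                ctx.sendDownstream(e);
            }
        }

        @Override
        public synchronized void messageReceived(ChannelHandlerContext ctx,
                MessageEvent e) throws Exception {
            if (e.getMessage() instanceof HttpResponse) {
                // The outstanding request has been answered; release the
                // next queued request, if any.
                MessageEvent next = pending.poll();
                if (next != null) {
                    ctx.sendDownstream(next);
                } else {
                    requestInFlight = false;
                }
            }
            super.messageReceived(ctx, e);
        }
    }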
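
And for the ChannelFuture concern in issue 5: when one user-level write is split into several fragments, the caller's future ought to complete only once the last fragment has been written, and fail if any fragment fails. Something like the listener below is what I am aiming for (a sketch, not the patch code); review of whether this honours Netty's semantics would be welcome. Each fragment's write future would register this listener, constructed with the caller's original future and the fragment count.

    import org.jboss.netty.channel.ChannelFuture;
    import org.jboss.netty.channel.ChannelFutureListener;

    // Counts down over the futures of all fragments of one user write and
    // completes the caller's original future when the last one finishes.
    public class AggregatingFutureListener implements ChannelFutureListener {

        private final ChannelFuture callerFuture;
        private int remaining;
        private boolean failed = false;

        public AggregatingFutureListener(ChannelFuture callerFuture,
                int fragments) {
            this.callerFuture = callerFuture;
            this.remaining = fragments;
        }

        public synchronized void operationComplete(ChannelFuture f) {
            if (!f.isSuccess() && !failed) {
                // Fail fast on the first failed fragment.
                failed = true;
                callerFuture.setFailure(f.getCause());
            }
            if (--remaining == 0 && !failed) {
                callerFuture.setSuccess();
            }
        }
    }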

I may think of other issues as time goes on, but for now it is
probably best if we get some wider exposure of this work and some
testing in environments other than my own.
