Non-blocking (NIO) Server Push and Servlet 3
In my previous article I wrote about what it would take to use node.js seriously. I imagined a framework on top of which the developer only has to define the protocol and its handlers and can then concentrate on useful business logic, working in the way they are already comfortable with from Java EE. While writing that article I came across a technique called Comet. I used to think that for traditional HTTP-based web applications a non-blocking server had no real advantage over a blocking one, which is why I had designed my own protocol and a VOIP server for streaming binary data to the currently connected clients.
Having now looked into Comet, I realise that there is in fact a great use case for non-blocking servers in web applications: pushing data to the client, for example pushing the latest prices to the clients of a stock-market site. You can solve this with polling, but Comet uses long-polling or, better still, full-on server push; there is a good introduction to it out there. The clever part of Comet is that when the client sends a request, the server does not answer straight away: it keeps the connection open and only returns data at some later point, possibly several times. Comet is not a new idea - it seems to have been talked about as early as 2006, and I only came across the term in 2009, so I consider myself a latecomer to the Comet party.
Server push over non-blocking HTTP had me curious, so I started coding. Before long I had a rough implementation of HTTP in my own framework, and I could write server-side code with handlers that did what servlets normally do, except that they extended HttpHandler.
For Comet push to work, the client first has to log in to the server and register itself. In the example below I don't check the login against a database (in a real application you would have to verify that the user is legitimate), but I did introduce the notion of a channel that each client can subscribe to. During login the client tells the server which channel it wants, and the server adds the client's non-blocking connection to its subscription model. The server responds using chunked transfer encoding, because that way the connection stays open and you don't have to declare up front how much data will be sent back to the client. Later, when an event occurs, the server can take that still-open connection and push the new event to the subscribing clients as a fresh chunk.
The server side was not hard, but I ran into problems on the client side, until I realised that with this kind of ajax response the data arrives while the ready state is 3, rather than the more usual 4. The ajax onreadystatechange callback hands you every byte received so far in responseText, not just the latest chunk, which was starting to get on my nerves until I found a way to make the browser append only the newly arrived chunk to a div on the page. Anyway, after a few hours I had a rather nice little application. There were still problems though, partly because my server implementation was rough, especially around handling client disconnects, for example when the browser page is closed and the connection is dropped by the client. I would ultimately have had to implement HTTP properly in my own framework, which felt like reinventing the wheel, because servlets already do all of that, and do it better than I would. One of the things I dislike about node.js is exactly that you have to redo this kind of work, whereas with Java servlets it is already there.
As the article referenced above says, Servlet 3.0 can be used to implement Comet. I downloaded Tomcat 7.0, which supports Servlet 3.0, and ported my application onto proper servlets. There is little documentation that describes the asynchronous features of Servlet 3.0 in any depth, so using them properly takes a bit of effort; JSR 315 was a great help here. Once I had got to grips with asynchronous servlets, I ended up with a thoroughly satisfying solution for server push.
The first thing to do is to reconfigure Tomcat's connector protocol so that connections are handled in a non-blocking way. That lets me keep client connections open for pushing data without relying on a thread-per-request model, in which the constant thread switching and the memory each thread needs would become the bottleneck. The connector protocol is configured in server.xml; use the following instead of the traditional configuration:
Non-blocking configuration:
<!-- NIO HTTP/1.1 connector -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />
Traditional configuration:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
As you can see, the non-blocking configuration uses "org.apache.coyote.http11.Http11NioProtocol" in place of "HTTP/1.1". With just this change Tomcat runs in non-blocking mode.
Next, we create two servlets. The first, LoginServlet, handles the client's login and its subscription to a message channel, and looks like this:
package ch.maxant.blog.nio.servlet3;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

import javax.servlet.AsyncContext;
import javax.servlet.ServletContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import ch.maxant.blog.nio.servlet3.model.Subscriber;

@WebServlet(name = "loginServlet", urlPatterns = { "/login" }, asyncSupported = true)
public class LoginServlet extends HttpServlet {

    public static final String CLIENTS = "ch.maxant.blog.nio.servlet3.clients";

    private static final long serialVersionUID = 1L;

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {

        // dont set the content length in the response, and we will end up with chunked
        // encoding so that a) we can keep the connection open to the client, and b) send
        // updates to the client as chunks.

        // *********************
        // we use asyncSupported=true on the annotation for two reasons. first of all,
        // it means the connection to the client isn't closed by the container. second
        // it means that we can pass the asyncContext to another thread (eg the publisher)
        // which can then send data back to that open connection.
        // so that we dont require a thread per client, we also use NIO, configured in the
        // connector of our app server (eg tomcat)
        // *********************

        // what channel does the user want to subscribe to?
        // for production we would need to check authorisations here!
        String channel = request.getParameter("channel");

        // ok, get an async context which we can pass to another thread
        final AsyncContext aCtx = request.startAsync(request, response);

        // a little longer than default, to give us time to test.
        // TODO if we use a heartbeat, then time that to pulse at a similar rate
        aCtx.setTimeout(20000L);

        // create a data object for this new subscription
        Subscriber subscriber = new Subscriber(aCtx, channel);

        // get the application scope so that we can add our data to the model
        ServletContext appScope = request.getServletContext();

        // fetch the model from the app scope
        @SuppressWarnings("unchecked")
        Map<String, List<Subscriber>> clients = (Map<String, List<Subscriber>>) appScope.getAttribute(CLIENTS);

        // add a listener so we can remove the subscriber from the model in
        // case of completion or errors or timeouts
        aCtx.addListener(new AsyncListener("login", clients, channel, subscriber));

        // *********************
        // now add this subscriber to the list of subscribers per channel.
        // *********************

        // restrict access to other sessions momentarily, so that this app
        // scoped model has the channel only put into it only one time
        synchronized (clients) {
            List<Subscriber> subscribers = clients.get(channel);
            if(subscribers == null){
                // first subscriber to this channel!
                subscribers = Collections.synchronizedList(new ArrayList<Subscriber>());
                clients.put(channel, subscribers);
            }

            // add our data object to the model
            subscribers.add(subscriber);
        }

        // acknowledge the subscription
        aCtx.getResponse().getOutputStream().print("hello - you are subscribed to " + channel);
        aCtx.getResponse().flushBuffer(); //to ensure the client gets this ack NOW
    }
}
The comments in the code explain my thinking. I add a listener to the async context so that we notice when a client disconnects and can clean up the connection model accordingly. The complete code for the example can be downloaded at the end of the article. Note that the servlet above does not actually do anything asynchronously itself; it merely stores the request and response objects away so that code running later can get at them when it needs to. I put them into a model object held in application scope. When a servlet receives a request to publish data to a given channel, we can take the requests and responses of the clients subscribed to that channel out of that model and push the data to them.
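Neither the Subscriber data object nor the listener class is listed above - they belong to the downloadable code at the end of the article. For readers following along without the download, here is a minimal sketch of what they might look like; the class bodies, and the hypothetical Bootstrap listener that seeds the application-scoped map before the first login, are assumptions, with only the constructor signatures taken from the servlet above.

package ch.maxant.blog.nio.servlet3.model;

import javax.servlet.AsyncContext;

// Sketch: pairs the open async connection with the channel it subscribed to.
public class Subscriber {
    private final AsyncContext aCtx;
    private final String channel;

    public Subscriber(AsyncContext aCtx, String channel) {
        this.aCtx = aCtx;
        this.channel = channel;
    }

    public AsyncContext getaCtx() { return aCtx; }
    public String getChannel() { return channel; }
}

package ch.maxant.blog.nio.servlet3;

import java.util.List;
import java.util.Map;

import javax.servlet.AsyncEvent;

import ch.maxant.blog.nio.servlet3.model.Subscriber;

// Sketch: removes the subscriber from the application-scoped model when the
// container reports that the connection has completed, timed out or failed.
// The simple name clashes with javax.servlet.AsyncListener, so the interface
// is referenced by its fully qualified name.
public class AsyncListener implements javax.servlet.AsyncListener {
    private final String name;
    private final Map<String, List<Subscriber>> clients;
    private final String channel;
    private final Subscriber subscriber;

    public AsyncListener(String name, Map<String, List<Subscriber>> clients,
                         String channel, Subscriber subscriber) {
        this.name = name;
        this.clients = clients;
        this.channel = channel;
        this.subscriber = subscriber;
    }

    private void remove(String why) {
        System.out.println(name + ": removing subscriber from " + channel + " (" + why + ")");
        synchronized (clients) {
            List<Subscriber> subscribers = clients.get(channel);
            if (subscribers != null) subscribers.remove(subscriber);
        }
    }

    public void onComplete(AsyncEvent event) { remove("complete"); }
    public void onTimeout(AsyncEvent event)  { remove("timeout"); }
    public void onError(AsyncEvent event)    { remove("error"); }
    public void onStartAsync(AsyncEvent event) { /* nothing to do */ }
}

package ch.maxant.blog.nio.servlet3;

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import ch.maxant.blog.nio.servlet3.model.Subscriber;

// Sketch (hypothetical name): seeds the application-scoped model before the first login.
@WebListener
public class Bootstrap implements ServletContextListener {
    public void contextInitialized(ServletContextEvent sce) {
        Map<String, List<Subscriber>> clients =
                Collections.synchronizedMap(new HashMap<String, List<Subscriber>>());
        sce.getServletContext().setAttribute(LoginServlet.CLIENTS, clients);
    }
    public void contextDestroyed(ServletContextEvent sce) { }
}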
package ch.maxant.blog.nio.servlet3;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;

import javax.servlet.AsyncContext;
import javax.servlet.ServletContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import ch.maxant.blog.nio.servlet3.model.Subscriber;

@WebServlet(name = "publishServlet", urlPatterns = { "/publish" }, asyncSupported = true)
public class PublishServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {

        // *************************
        // this servlet simply spawns a thread to send its message to all subscribers.
        // this servlet keeps the connection to its client open long enough to tell it
        // that it has published to all subscribers.
        // *************************

        // add a pipe character, so that the client knows from where the newest model has started.
        // if messages are published really quick, its possible that the client gets two at
        // once, and we dont want it to be confused! these messages also arrive at the
        // ajax client in readyState 3, where the responseText contains everything since login,
        // rather than just the latest chunk. so, the client needs a way to work out the
        // latest part of the message, containing the newest version of the model it should
        // work with. might be better to return XML or JSON here!
        final String msg = "|" + request.getParameter("message") + " " + new Date();

        // to which channel should it publish? in prod, we would check authorisations here too!
        final String channel = request.getParameter("channel");

        // get the application scoped model, and copy the list of subscribers, so that the
        // long running task of publishing doesnt interfere with new logins
        ServletContext appScope = request.getServletContext();
        @SuppressWarnings("unchecked")
        final Map<String, List<Subscriber>> clients = (Map<String, List<Subscriber>>) appScope.getAttribute(LoginServlet.CLIENTS);
        final List<Subscriber> subscribers = new ArrayList<Subscriber>(clients.get(channel));

        // we are going to hand the longer running work off to a new thread,
        // using the container and the async support it provides.
        final AsyncContext publisherAsyncCtx = request.startAsync();

        // aknowledge to the caller that we are starting to publish...
        response.getWriter().write("<html><body>Started publishing<br>");
        response.flushBuffer(); //tell the publisher NOW

        // here is the logic for publishing - it will be passed to the container for execution sometime in the future
        Runnable r = new Runnable(){
            @Override
            public void run() {
                System.out.println("starting publish of " + msg + " to channel " + channel + " for " + subscribers.size() + " subscribers...");
                long start = System.currentTimeMillis();

                //keep a list of failed subscribers so we can remove them at the end
                List<Subscriber> toRemove = new ArrayList<Subscriber>();
                for(Subscriber s : subscribers){
                    synchronized (s) {
                        AsyncContext aCtx = s.getaCtx();
                        try {
                            aCtx.getResponse().getOutputStream().print(msg);
                            aCtx.getResponse().flushBuffer(); //so the client gets it NOW
                        } catch (Exception e) {
                            System.err.println("failed to send to client - removing from list of subscribers on this channel");
                            toRemove.add(s);
                        }
                    }
                }

                // remove the failed subscribers from the model in app scope, not our copy of them
                synchronized (clients) {
                    clients.get(channel).removeAll(toRemove);
                }

                //log success
                long ms = System.currentTimeMillis() - start;
                String ok = "finished publish of " + msg + " to channel " + channel + " in " + ms + " ms.";
                System.out.println(ok);

                //aknowledge to the publishing client that we have finished.
                try {
                    publisherAsyncCtx.getResponse().getWriter().write(ok + "</body></html>");
                    publisherAsyncCtx.complete(); //we are done, the connection can be closed now
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        };

        //start the async processing (using a pool controlled by the container)
        publisherAsyncCtx.start(r);
    }
}
The code above is again commented in detail. In this servlet we do some genuinely long-running work, whereas many of the Servlet 3.0 async examples found online hand such work off to an Executor. The start(Runnable) method of the async context is an ideal home for long-running tasks: the developer no longer has to worry about thread scheduling - on application servers such as WebSphere developers are not allowed to manage threads themselves anyway - and the container decides how the task gets run. That is the whole point of Java EE: letting developers concentrate on business code rather than on the technical details.
Another important point in the code above is that the publishing is done by a separate thread. Imagine having to push data to 10,000 clients within one second: done from a single thread, each push could take no more than a tenth of a millisecond, which rules out even a routine database query. With a non-blocking server we may not need a whole second for the job, and in some scenarios a second would already be far too long to wait, so spreading the work across threads becomes all the more important. Unfortunately, although node.js lets you hand work off to other processes, it cannot actually process your tasks in parallel, which may be a lot worse than it looks.
Now all that remains is to create a client that subscribes to the server. The client is just an ajax request, created when the HTML page loads, using a small javascript library I wrote. The HTML looks like this:
<html>
<head>
<script language="Javascript" type="text/javascript" src="push_client.js"></script>
</head>
<body>
<p>Boo!</p>
<div id="myDiv"></div>
</body>
<script language="Javascript" type="text/javascript">
    function callback(model){
        //simply append the model to a div, for demo purposes
        var myDiv = document.getElementById("myDiv");
        myDiv.innerHTML = myDiv.innerHTML + "<br>" + model;
    }

    new PushClient("myChannel", callback).login();
</script>
</html>
As you can see, the client only has to define a callback function that handles whatever the server publishes - plain text, XML or JSON, as the publisher sees fit. The javascript library looks a little more involved, but in essence it creates an XMLHttpRequest which sends the request to the login servlet, then parses what comes back from the server and hands the latest part of it to the callback.
function PushClient(ch, m){

    this.channel = ch;
    this.ajax = getAjaxClient();
    this.onMessage = m;

    // stick a reference to "this" into the ajax client, so that the handleMessage
    // function can access the push client - its "this" is an XMLHttpRequest object
    // rather than the push client, coz thats how javascript works!
    this.ajax.pushClient = this;

    function getAjaxClient(){
        /*
         * Gets the ajax client
         * http://en.wikipedia.org/wiki/XMLHttpRequest
         * http://www.w3.org/TR/XMLHttpRequest/#responsetext
         */
        var client = null;
        try{
            // Firefox, Opera 8.0+, Safari
            client = new XMLHttpRequest();
        }catch (e){
            // Internet Explorer
            try{
                client = new ActiveXObject("Msxml2.XMLHTTP");
            }catch (e){
                client = new ActiveXObject("Microsoft.XMLHTTP");
            }
        }
        return client;
    };

    /**
     * pass in a callback and a channel.
     * the callback should take a string,
     * which is the latest version of the model
     */
    PushClient.prototype.login = function(){
        try{
            var params = escape("channel") + "=" + escape(this.channel);
            var url = "login?" + params;
            this.ajax.onreadystatechange = handleMessage;
            this.ajax.open("GET",url,true); //true means async, which is the safest way to do it

            // hint to the browser and server, that we are doing something long running
            // initial tests only seemed to work with this - dont know, perhaps now it
            // works without it?
            this.ajax.setRequestHeader("Connection", "Keep-Alive");
            this.ajax.setRequestHeader("Keep-Alive", "timeout=999, max=99");
            this.ajax.setRequestHeader("Transfer-Encoding", "chunked");

            //send the GET request to the server
            this.ajax.send(null);
        }catch(e){
            alert(e);
        }
    };

    function handleMessage() {
        //states are:
        // 0 (Uninitialized) The object has been created, but not initialized (the open method has not been called).
        // 1 (Open) The object has been created, but the send method has not been called.
        // 2 (Sent) The send method has been called. responseText is not available. responseBody is not available.
        // 3 (Receiving) Some data has been received. responseText is not available. responseBody is not available.
        // 4 (Loaded)
        try{
            if(this.readyState == 0){
                //this.pushClient.onMessage("0/-/-");
            }else if (this.readyState == 1){
                //this.pushClient.onMessage("1/-/-");
            }else if (this.readyState == 2){
                //this.pushClient.onMessage("2/-/-");
            }else if (this.readyState == 3){
                //for chunked encoding, we get the newest version of the entire response here,
                //rather than in readyState 4, which is more usual.
                if (this.status == 200){
                    this.pushClient.onMessage("3/200/" + this.responseText.substring(this.responseText.lastIndexOf("|")));
                }else{
                    this.pushClient.onMessage("3/" + this.status + "/-");
                }
            }else if (this.readyState == 4){
                if (this.status == 200){
                    //the connection is now closed.
                    this.pushClient.onMessage("4/200/" + this.responseText.substring(this.responseText.lastIndexOf("|")));

                    //start again - we were just disconnected!
                    this.pushClient.login();
                }else{
                    this.pushClient.onMessage("4/" + this.status + "/-");
                }
            }
        }catch(e){
            alert(e);
        }
    };
}
Here we have to listen for ready states 3 and 4, not just the usual 4. Once the connection is open, the client only ever receives data in ready state 3, one chunk at a time, but ajax.responseText contains everything received since login rather than just the latest part. If a connection received a terabyte of data that would end badly - the browser would eventually run out of memory and crash. So on the server you need to keep count of how much data has been sent to each client, and once a threshold is exceeded force that client to disconnect (after sending, call complete() on that client's async context to end the connection). The client should then automatically reconnect to the server to carry on receiving data.
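The example code does not actually enforce such a limit, but a hedged sketch of how the publish loop could keep count and recycle a connection might look like this. The ChunkPusher class, the 1 MB threshold and the byte counting are assumptions, not part of the original download; once the threshold is reached it calls complete(), the browser sees ready state 4, and the PushClient above logs in again with a fresh, empty responseText.

package ch.maxant.blog.nio.servlet3;

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.AsyncContext;

import ch.maxant.blog.nio.servlet3.model.Subscriber;

// Hypothetical helper: pushes one chunk to a subscriber and, once a size threshold
// has been reached, completes the async context so the browser reconnects.
public class ChunkPusher {

    private static final long MAX_BYTES_PER_CONNECTION = 1024L * 1024L; // 1 MB, arbitrary

    private final Map<Subscriber, Long> bytesSent = new ConcurrentHashMap<Subscriber, Long>();

    public void push(Subscriber s, String msg) throws IOException {
        AsyncContext aCtx = s.getaCtx();
        aCtx.getResponse().getOutputStream().print(msg);
        aCtx.getResponse().flushBuffer(); // deliver the chunk immediately

        // keep an approximate count of what this connection has received so far
        Long total = bytesSent.get(s);
        total = (total == null ? 0L : total) + msg.getBytes("UTF-8").length;
        bytesSent.put(s, total);

        if (total > MAX_BYTES_PER_CONNECTION) {
            bytesSent.remove(s);
            aCtx.complete(); // close the chunked response; the client logs in again
        }
    }
}

The publish loop in PublishServlet could then call push(s, msg) instead of writing to the output stream directly.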
To stop the client's connection being dropped by a timeout (timeouts are fairly normal with long polling), we can add a heartbeat, with the server periodically sending something to each client. The heartbeat interval should generally be shorter than the timeout, but there are trade-offs in how you do it: do you send a heartbeat every second, and only to the clients that need one, or every 25 seconds to all clients - assuming a 30 second timeout? You also have to weigh up whether that performs better than the "server closes the connection, client reconnects" approach described above. Because the heartbeat touches every connection frequently and a failed push throws an exception, it also makes it easy to detect broken connections and remove them from the server. Then again, when a connection breaks the container notifies the listener we registered anyway, so perhaps you don't need to lean on heartbeats that heavily; measure it and decide for yourself. :)
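A heartbeat is not part of the downloadable example, but a minimal sketch of one way to do it might look like the following. The HeartbeatListener name, the 15 second interval (kept inside the 20 second timeout set in LoginServlet) and the "|heartbeat" payload are assumptions; a failed write is treated as a broken connection and cleaned up. Note the caveat above: on servers such as WebSphere you are not supposed to manage your own threads, so a real implementation would use whatever scheduling facility the container offers.

package ch.maxant.blog.nio.servlet3;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

import ch.maxant.blog.nio.servlet3.model.Subscriber;

// Hypothetical heartbeat: every 15 seconds it writes a small chunk to every open
// subscription; subscribers whose connection has gone away throw on write and are
// removed from the application-scoped model.
@WebListener
public class HeartbeatListener implements ServletContextListener {

    private ScheduledExecutorService scheduler;

    public void contextInitialized(final ServletContextEvent sce) {
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                @SuppressWarnings("unchecked")
                Map<String, List<Subscriber>> clients = (Map<String, List<Subscriber>>)
                        sce.getServletContext().getAttribute(LoginServlet.CLIENTS);
                synchronized (clients) {
                    for (List<Subscriber> subscribers : clients.values()) {
                        List<Subscriber> dead = new ArrayList<Subscriber>();
                        for (Subscriber s : subscribers) {
                            try {
                                s.getaCtx().getResponse().getOutputStream().print("|heartbeat");
                                s.getaCtx().getResponse().flushBuffer();
                            } catch (Exception e) {
                                dead.add(s); // connection is gone, clean it up
                            }
                        }
                        subscribers.removeAll(dead);
                    }
                }
            }
        }, 15, 15, TimeUnit.SECONDS);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        scheduler.shutdownNow();
    }
}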
Right, let's see how to publish some data. It couldn't be simpler: type a URL like the following into a browser and hit enter, and a GET request goes off to the publish servlet:
http://localhost:8080/nio-servlet3/publish?channel=myChannel&message=javaIsAwesome
Open a few more browser windows; when you refresh the page in one of them, the others receive the push almost in real time and show the latest message at the bottom of the page. In my tests I used Firefox as the subscriber and Chrome to publish the data.
The implementation above rests on the assumption that Tomcat's NIO connector is well tested and performs well, so I have not looked at scalability at all; I may investigate later how well Servlet 3.0 scales as a push solution. The point of this article was to show how simple it is to implement Comet push with Java EE servlets. Containers such as Jetty and Tomcat have offered their own Comet APIs for a while, and those were easy enough to use too, but with Servlet 3.0 we now have a standard way of doing it.
There are also plenty of serious open source solutions to the problem described here. One article I came across lists a whole range of them; the strangest has to be APE - yet another project putting JavaScript on the server!? Let's hope it does a better job of it than PHP :)
Click here to download the complete code.
Translator's note - source article: http://blog.maxant.co.uk/pebble/2011/06/05/1307299200000.html