[Todo] Learning AMQP by Example

This site is excellent:

http://www.rabbitmq.com/getstarted.html

It covers the various AMQP usage patterns, with examples in many languages:

Part 1: http://www.rabbitmq.com/tutorials/tutorial-one-python.html

Note no_ack=True: when it is set, the consumer does not need to call basic_ack. This is covered below.

Putting it all together

Full code for send.py:

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

(send.py source)

Full receive.py code:

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

(receive.py source)

Now we can try out our programs in a terminal. First, let's send a message using our send.py program:

 $ python send.py
[x] Sent 'Hello World!'

The producer program send.py will stop after every run. Let's receive it:

 $ python receive.py
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'Hello World!'

Hurray! We were able to send our first message through RabbitMQ. As you might have noticed, the receive.py program doesn't exit. It will stay ready to receive further messages, and may be interrupted with Ctrl-C.

Try to run send.py again in a new terminal.

We've learned how to send and receive a message from a named queue. It's time to move on to part 2 and build a simple work queue.

Part 2: http://www.rabbitmq.com/tutorials/tutorial-two-python.html

It covers the ack mechanism mentioned above, as well as queue and message durability.
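
As a quick sketch of those two durability settings (following the older pika API these tutorials use; the queue name and message body are only illustrative): the queue is declared durable, and each message is marked persistent with delivery_mode=2.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Declare the queue as durable so the queue itself survives a broker restart.
channel.queue_declare(queue='task_queue', durable=True)

# Mark the message as persistent (delivery_mode=2) so it is written to disk.
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body='A task...',
                      properties=pika.BasicProperties(delivery_mode=2))
connection.close()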

Round-robin dispatching

One of the advantages of using a Task Queue is the ability to easily parallelise work. If we are building up a backlog of work, we can just add more workers and that way, scale easily.

First, let's try to run two worker.py scripts at the same time. They will both get messages from the queue, but how exactly? Let's see.

You need three consoles open. Two will run the worker.py script. These consoles will be our two consumers - C1 and C2.

shell1$ python worker.py
[*] Waiting for messages. To exit press CTRL+C
 
shell2$ python worker.py
[*] Waiting for messages. To exit press CTRL+C

In the third one we'll publish new tasks. Once you've started the consumers you can publish a few messages:

shell3$ python new_task.py First message.
shell3$ python new_task.py Second message..
shell3$ python new_task.py Third message...
shell3$ python new_task.py Fourth message....
shell3$ python new_task.py Fifth message.....

Let's see what is delivered to our workers:

shell1$ python worker.py
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'First message.'
[x] Received 'Third message...'
[x] Received 'Fifth message.....'
 
shell2$ python worker.py
[*] Waiting for messages. To exit press CTRL+C
[x] Received 'Second message..'
[x] Received 'Fourth message....'

By default, RabbitMQ will send each message to the next consumer, in sequence. On average every consumer will get the same number of messages. This way of distributing messages is called round-robin. Try this out with three or more workers.
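
For reference, a sketch of the worker.py used above, adapted from the tutorial (older pika API): it simulates work by sleeping one second per dot in the message and sends a manual basic_ack once the work is done. new_task.py is essentially send.py with the message text taken from the command line.

#!/usr/bin/env python
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))                     # fake work: one second per dot
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)   # explicit ack, since no_ack is not set

channel.basic_consume(callback, queue='task_queue')
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()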

Part 3: http://www.rabbitmq.com/tutorials/tutorial-three-python.html

In the previous tutorial we created a work queue. The assumption behind a work queue is that each task is delivered to exactly one worker. In this part we'll do something completely different -- we'll deliver a message to multiple consumers. This pattern is known as "publish/subscribe".

The core idea in the messaging model in RabbitMQ is that the producer never sends any messages directly to a queue. Actually, quite often the producer doesn't even know if a message will be delivered to any queue at all.

There are a few exchange types available: direct, topic, headers and fanout. We'll focus on the last one -- the fanout. Let's create an exchange of that type, and call it logs:

channel.exchange_declare(exchange='logs',
                         type='fanout')
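
To see how the pieces fit together, here is a rough sketch of the two programs from that tutorial, emit_log.py and receive_logs.py (older pika API): the publisher sends to the logs exchange with an empty routing key, and each subscriber binds a fresh, broker-named exclusive queue to the exchange.

# emit_log.py (publisher): messages go to the exchange, never to a named queue.
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='logs', type='fanout')
channel.basic_publish(exchange='logs', routing_key='', body='info: Hello World!')
connection.close()

# receive_logs.py (subscriber): a temporary, exclusive queue bound to the exchange.
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='logs', type='fanout')
result = channel.queue_declare(exclusive=True)   # broker picks the name; deleted on disconnect
queue_name = result.method.queue
channel.queue_bind(exchange='logs', queue=queue_name)

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()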

To find out how to listen for a subset of messages, let's move on to tutorial 4.

Part 4: http://www.rabbitmq.com/tutorials/tutorial-four-python.html

We were using a fanout exchange, which doesn't give us too much flexibility - it's only capable of mindless broadcasting.

We will use a direct exchange instead. The routing algorithm behind a direct exchange is simple - a message goes to the queues whose binding key exactly matches the routing key of the message.

(Tutorial diagram: a direct exchange X with queues Q1 and Q2, each bound by its own binding keys.)

It is perfectly legal to bind multiple queues with the same binding key. In our example we could add a binding between X and Q1 with binding key black. In that case, the direct exchange will behave like fanout and will broadcast the message to all the matching queues. A message with routing key black will be delivered to both Q1 and Q2.
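
A short sketch of those bindings (older pika API; the exchange name direct_logs and the keys are illustrative): each consumer binds its own queue with the binding keys it is interested in, and the publisher picks the routing key per message.

# Consumer side: bind one temporary queue with the binding keys we care about.
channel.exchange_declare(exchange='direct_logs', type='direct')
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
for key in ['black', 'green']:
    channel.queue_bind(exchange='direct_logs', queue=queue_name, routing_key=key)

# Producer side: the routing key decides which bound queues receive the message.
channel.basic_publish(exchange='direct_logs', routing_key='black',
                      body='a black message')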

However, direct routing still has a limitation: it can't do routing based on multiple criteria.

Part 5: http://www.rabbitmq.com/tutorials/tutorial-five-python.html

Topic exchange

Messages sent to a topic exchange can't have an arbitrary routing_key - it must be a list of words, delimited by dots.

A few valid routing key examples: "stock.usd.nyse", "nyse.vmw", "quick.orange.rabbit". There can be as many words in the routing key as you like, up to the limit of 255 bytes.

    • * (star) can substitute for exactly one word.
    • # (hash) can substitute for zero or more words.

Topic exchange

Topic exchange is powerful and can behave like other exchanges.

When a queue is bound with "#" (hash) binding key - it will receive all the messages, regardless of the routing key - like in fanout exchange.

When special characters "*" (star) and "#" (hash) aren't used in bindings, the topic exchange will behave just like a direct one.
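
As a sketch (older pika API; the exchange name topic_logs and the keys come from the tutorial's examples), bindings with the two wildcards look like this:

channel.exchange_declare(exchange='topic_logs', type='topic')
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

# '*' matches exactly one word, '#' matches zero or more words.
channel.queue_bind(exchange='topic_logs', queue=queue_name, routing_key='*.orange.*')
channel.queue_bind(exchange='topic_logs', queue=queue_name, routing_key='lazy.#')

# 'quick.orange.rabbit' matches '*.orange.*'; a key such as 'lazy.brown.fox' matches 'lazy.#'.
channel.basic_publish(exchange='topic_logs', routing_key='quick.orange.rabbit',
                      body='Hello')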

Part 6: http://www.rabbitmq.com/tutorials/tutorial-six-python.html

RPC

Callback queue

In general doing RPC over RabbitMQ is easy. A client sends a request message and a server replies with a response message. In order to receive a response the client needs to send a 'callback' queue address with the request. Let's try it:

result = channel.queue_declare(exclusive=True)
callback_queue = result.method.queue

channel.basic_publish(exchange='',
                      routing_key='rpc_queue',
                      properties=pika.BasicProperties(
                            reply_to = callback_queue,
                            ),
                      body=request)

# ... and some code to read a response message from the callback_queue ...

Message properties

The AMQP protocol predefines a set of 14 properties that go with a message. Most of the properties are rarely used, with the exception of the following:

  • delivery_mode: Marks a message as persistent (with a value of 2) or transient (any other value). You may remember this property from the second tutorial.
  • content_type: Used to describe the mime-type of the encoding. For example for the often used JSON encoding it is a good practice to set this property to: application/json.
  • reply_to: Commonly used to name a callback queue.
  • correlation_id: Useful to correlate RPC responses with requests.

The server code is rather straightforward:

  • (4) As usual we start by establishing the connection and declaring the queue.
  • (11) We declare our fibonacci function. It assumes only valid positive integer input. (Don't expect this one to work for big numbers, it's probably the slowest recursive implementation possible).
  • (19) We declare a callback for basic_consume, the core of the RPC server. It's executed when the request is received. It does the work and sends the response back.
  • (32) We might want to run more than one server process. In order to spread the load equally over multiple servers we need to set the prefetch_count setting.
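
A sketch of the rpc_server.py those notes refer to, adapted from the tutorial (older pika API): the on_request callback computes the result, publishes the reply to the queue named in reply_to with the request's correlation_id echoed back, and then acks the request.

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='rpc_queue')

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

def on_request(ch, method, props, body):
    n = int(body)
    print(" [.] fib(%s)" % n)
    response = fib(n)
    # Reply to the client's callback queue, echoing back the correlation_id
    # so the client can match this response to its request.
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)          # spread requests evenly over server processes
channel.basic_consume(on_request, queue='rpc_queue')
print(" [x] Awaiting RPC requests")
channel.start_consuming()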

Note: when the default exchange is used, there is no need to bind the exchange to the queue; see Parts 1 and 2.

Note how the RPC client waits for the response on the callback queue:

while self.response is None:
    self.connection.process_data_events()
return int(self.response)

  • (32) At this point we can sit back and wait until the proper response arrives.
  • (33) And finally we return the response back to the user.
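
For completeness, a sketch of the client side around that loop, adapted from the tutorial's FibonacciRpcClient (older pika API): on_response only accepts a reply whose correlation_id matches the id sent with the request, and call() busy-waits on process_data_events() until that reply arrives.

import pika
import uuid

class FibonacciRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='localhost'))
        self.channel = self.connection.channel()
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    def on_response(self, ch, method, props, body):
        # Ignore any reply that does not belong to our outstanding request.
        if self.corr_id == props.correlation_id:
            self.response = body

    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()
        return int(self.response)

fibonacci_rpc = FibonacciRpcClient()
print(" [x] Requesting fib(30)")
print(" [.] Got %r" % fibonacci_rpc.call(30))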

(End)
