SQS
- Decouple applications
- unlimited throughput, unlimited number of messages in queue
- Default message retention of 4 days, max 14 days
- 256KB size per message
- can have duplicate messages (at-least-once delivery, occasionally)
- can have out of order message (best effort ordering)
- API: SendMessage
Consuming Messages
- Poll SQS for messages (receive up to 10 messages at a time)
- API: DeleteMessage after the message has been processed
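A minimal sketch of the poll-process-delete cycle. The queue URL is hypothetical, and this only builds the request dicts that boto3's `receive_message` / `delete_message` calls would take:

```python
# Sketch of the SQS consume cycle (queue URL is made up for illustration).
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/my-queue"

receive_params = {
    "QueueUrl": QUEUE_URL,
    "MaxNumberOfMessages": 10,  # SQS returns up to 10 messages per poll
}
# messages = sqs.receive_message(**receive_params).get("Messages", [])

def delete_params(receipt_handle: str) -> dict:
    # After successfully processing a message, delete it by its ReceiptHandle:
    # sqs.delete_message(**delete_params(msg["ReceiptHandle"]))
    return {"QueueUrl": QUEUE_URL, "ReceiptHandle": receipt_handle}
```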
Multi EC2 instance consumers + Auto Scaling Group (ASG)
If the queue has a lot of messages, we can scale consumers horizontally to improve processing throughput.
We can use the CloudWatch metric `ApproximateNumberOfMessages` for the queue; if it rises above a threshold, a CloudWatch Alarm fires and triggers the consumers' ASG to increase the number of EC2 instances consuming messages.
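A back-of-the-envelope version of that scaling decision. The backlog-per-instance target (100) and the min/max sizes are assumptions for illustration; the real alarm would be configured in CloudWatch, not in code:

```python
import math

def desired_instances(approx_messages: int, msgs_per_instance: int = 100,
                      min_size: int = 1, max_size: int = 10) -> int:
    """How many consumer instances the ASG should run for the current backlog.
    msgs_per_instance=100 is an assumed target, not an AWS default."""
    wanted = math.ceil(approx_messages / msgs_per_instance)
    return max(min_size, min(max_size, wanted))

# 950 queued messages -> 10 instances (capped at max_size)
```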
Security
- In-flight: HTTPS
- At-rest: KMS
- Client-side: encrypt/decrypt by yourself
- IAM needed for accessing SQS API
- SQS access policies: cross-account access / allow other service (S3, SNS) to write to SQS
SQS Queue Access Policy
In the first case, we want to allow an EC2 instance in another AWS account to poll messages from the queue. The policy sets the `Principal` to the other AWS account's ID, allows the `Action` `sqs:ReceiveMessage`, and sets the `Resource` to the queue itself.
In the second case, we want to allow another AWS service to `sqs:SendMessage` to the queue. The `Condition` should match the S3 bucket's ARN and the bucket owner's account ID; the `Resource` should be the queue itself.
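Sketches of both policies as Python dicts mirroring the IAM policy JSON you would set as the queue's access policy. All account IDs, queue ARNs, and bucket names are hypothetical:

```python
import json

# Case 1: let a principal in another account (hypothetical ID) poll the queue.
cross_account_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
        "Action": "sqs:ReceiveMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:my-queue",
    }],
}

# Case 2: let S3 send event notifications to the queue.
s3_write_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:my-queue",
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:s3:::my-bucket"},
            "StringEquals": {"aws:SourceAccount": "111122223333"},
        },
    }],
}

# The queue's Policy attribute takes the policy as a JSON string:
policy_json = json.dumps(s3_write_policy)
```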
Visibility Timeout
- Default timeout is 30 seconds
- If a task takes longer than 30 seconds, other consumers will be able to process the message again. The solution is to increase the timeout, for example to 2 minutes.
- API: ChangeMessageVisibility; if the consumer knows the task will take longer than 30 seconds, it should call this API to increase the visibility timeout
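A sketch of the parameters that call would take, following boto3's SQS API naming; the queue URL and receipt handle are placeholders:

```python
# Extend the visibility timeout for one in-flight message to 2 minutes.
# Parameter names follow boto3's SQS API; URL and handle are hypothetical.
visibility_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111122223333/my-queue",
    "ReceiptHandle": "AQEB-example-handle",
    "VisibilityTimeout": 120,  # seconds; applies only to this message
}
# sqs.change_message_visibility(**visibility_params)
```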
Dead Letter Queue
- If a message is not successfully processed, it goes back to the queue
- A consumer will then process it again... and it may go back to the queue again, which causes problems
- So if a message has failed a few times, we want to send it to a Dead Letter Queue
- Mainly for debugging!
- Set the DLQ's retention to 14 days, so that you have enough time to process it
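A sketch of the redrive policy that wires a queue to its DLQ. The DLQ ARN is hypothetical, and `maxReceiveCount` of 3 is an assumed threshold ("failed a few times"):

```python
import json

# After 3 failed receives, SQS moves the message to the DLQ.
# The DLQ ARN is made up; the RedrivePolicy attribute is a JSON string.
redrive_policy = {
    "deadLetterTargetArn": "arn:aws:sqs:us-east-1:111122223333:my-dlq",
    "maxReceiveCount": "3",
}
attributes = {"RedrivePolicy": json.dumps(redrive_policy)}
# sqs.set_queue_attributes(QueueUrl=..., Attributes=attributes)
```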
Delay Queue
- Default delay is 0 second
- Up to 15 mins
- API: DelaySeconds parameter
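A sketch of a delayed send. Names follow boto3's SQS API; the queue URL is hypothetical, and the helper just validates the 0-900 second range before building the request:

```python
# Send a message that stays invisible for a while after being sent.
# DelaySeconds is capped at 900 (15 minutes).
MAX_DELAY = 900

def send_params(body: str, delay_seconds: int) -> dict:
    if not 0 <= delay_seconds <= MAX_DELAY:
        raise ValueError(f"DelaySeconds must be between 0 and {MAX_DELAY}")
    return {
        "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111122223333/my-queue",
        "MessageBody": body,
        "DelaySeconds": delay_seconds,
    }

# sqs.send_message(**send_params("hello", 300))  # delivered after 5 minutes
```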
WaitTimeSeconds can easily be set in the console
1-20 seconds for long polling
If your message is larger than 256KB, use the SQS Extended Client, which sends your payload to S3 and only metadata to SQS; the consumer then pulls the data from S3.
The same pattern is used with DynamoDB, where the item size limit is 400KB.
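A bare-bones simulation of that large-payload pattern (the real Extended Client is a library; the bucket name and key scheme here are made up, and the S3 upload is left as a comment):

```python
import json
import uuid

SIZE_LIMIT = 256 * 1024  # SQS max message size in bytes

def prepare_message(body: bytes) -> dict:
    """If the body exceeds 256KB, pretend to upload it to S3 and send
    only a pointer through SQS; otherwise send the body directly."""
    if len(body) <= SIZE_LIMIT:
        return {"MessageBody": body.decode()}
    key = f"payloads/{uuid.uuid4()}"
    # s3.put_object(Bucket="my-payload-bucket", Key=key, Body=body)
    pointer = {"s3Bucket": "my-payload-bucket", "s3Key": key}
    return {"MessageBody": json.dumps(pointer)}
```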
ReceiveMessageWaitTimeSeconds is 0 by default (short polling); if set to 1-20 seconds, it enables long polling.
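A sketch of a long-polling receive call, using boto3's per-call `WaitTimeSeconds` parameter (the queue URL is hypothetical):

```python
# WaitTimeSeconds=0 returns immediately (short polling);
# 1-20 seconds makes the call wait for messages (long polling).
long_poll_params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111122223333/my-queue",
    "MaxNumberOfMessages": 10,
    "WaitTimeSeconds": 20,  # maximum allowed; fewer empty responses
}
# sqs.receive_message(**long_poll_params)
```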
The queue name has to be `<name>.fifo`; if it doesn't end with `.fifo`, it will not be a FIFO queue.
SQS FIFO - Deduplication
FIFO can do content-based deduplication: for example, if you send the content `hello world` twice, the content is hashed on the first send; on the second send, because the content is the same, the message is ignored. Or you can provide a `MessageDeduplicationId`: if two messages carry the same ID, the second is ignored.
- De-duplication interval is 5 minutes
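Content-based deduplication derives the ID as a SHA-256 hash of the message body; a minimal illustration:

```python
import hashlib

def content_based_dedup_id(body: str) -> str:
    # SQS uses a SHA-256 hash of the message body as the deduplication ID
    return hashlib.sha256(body.encode()).hexdigest()

first = content_based_dedup_id("hello world")
second = content_based_dedup_id("hello world")
# Same content -> same ID, so within the 5-minute window the second send is ignored.
assert first == second
```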
Message Grouping
- MessageGroupId is ideal for parallel processing, because one consumer only processes one group ID
- And all the messages in the same group ID are in order
- But ordering across groups is not guaranteed! It can be A-B-C or B-A-C ...
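A tiny simulation of those ordering guarantees: order is preserved *within* each group ID, while the interleaving across groups is arbitrary:

```python
from collections import defaultdict

# Messages as (MessageGroupId, sequence) pairs, interleaved across groups A and B.
sent = [("A", 1), ("B", 1), ("A", 2), ("B", 2), ("A", 3)]

per_group = defaultdict(list)
for group_id, seq in sent:
    per_group[group_id].append(seq)

assert per_group["A"] == [1, 2, 3]  # in order within group A
assert per_group["B"] == [1, 2]     # in order within group B
# The global interleaving (A-B-A-B-A here) carries no guarantee.
```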
SNS
Fan Out pattern
Why use Fan Out?
If one event source triggers only one action, we don't need fan-out.
But if one event source triggers multiple actions, we need fan-out.
For example, if an S3 Object Created event needs to trigger two queues and one Lambda, we need to use fan-out.
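A toy simulation of fan-out: one published event is copied to every subscriber of the topic (the endpoint names are made up):

```python
# One SNS topic delivering each published message to every subscription:
# two queues and a Lambda here, all hypothetical.
subscribers = ["queue-1", "queue-2", "lambda-fn"]
deliveries = []

def publish(message: str) -> None:
    # SNS pushes a copy of the message to every subscription on the topic
    for endpoint in subscribers:
        deliveries.append((endpoint, message))

publish("s3:ObjectCreated")
assert len(deliveries) == 3  # one event, three deliveries
```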
Message Filtering
The same topic carries lots of information, and each of the multiple subscribers is only interested in part of it.
Then we can apply filtering in SNS
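A minimal sketch of how a filter policy matches message attributes, so a subscriber only receives matching messages. The attribute name and values are hypothetical, and this covers only exact-value matching (real SNS policies support more operators):

```python
def matches(filter_policy: dict, attributes: dict) -> bool:
    # A message matches if every attribute named in the policy
    # has a value in the policy's allowed list.
    return all(attributes.get(key) in allowed
               for key, allowed in filter_policy.items())

orders_policy = {"order_status": ["placed", "cancelled"]}  # hypothetical attribute
assert matches(orders_policy, {"order_status": "placed"})
assert not matches(orders_policy, {"order_status": "shipped"})
```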
Protocol