The setup I am testing is this:
The same application runs on 3 nodes for redundancy. Each node runs a single SwiftMQ instance (v5.0 in this case), and every application instance is connected to every SwiftMQ instance.
The applications are both the producer and consumer for the queues, and they use the queues in round robin. Each application instance has 2 producers and 10 consumers.
What I see is that once the queues reach a certain depth, ~2000 messages per queue, performance drops dramatically, i.e. by more than 50%. This in turn makes the queues grow even faster and can cause a severe backlog of messages, which can take a long time to clear, resulting in large latency issues.
This does not happen when each application instance connects to only one SwiftMQ instance, but in that case there is no redundancy.
Are there any configuration changes that would let this redundant setup perform at a higher rate?
A decrease in throughput when the queue backlog grows is usually caused by exhaustion of the store cache. Have a look here. You may also want to increase the size of the queue caches; the default is 500 messages (attribute "cache-size" per queue).
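For orientation, a per-queue cache setting in the router configuration might look like the fragment below. This is only a sketch: the element nesting and attribute names are from memory and may differ between SwiftMQ releases, so check your own routerconfig.xml for the exact structure.

```xml
<!-- Sketch of raising the per-queue cache from the 500-message default.
     Element names and nesting may vary by SwiftMQ release. -->
<swiftlet name="sys$queuemanager">
  <queues>
    <queue name="appqueue" cache-size="5000"/>
  </queues>
</swiftlet>
```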
I have tried tuning the store and have seen a small improvement. Maybe that is all the machine is capable of.
However, I still have an issue with locked messages in queues that never get read by any consumer.
The setup I have is an application that uses SwiftMQ as an internal buffer/queue, i.e. the application is both producer and consumer. The installation has 3 instances of the application and 3 SwiftMQ instances, with one application and one queue per machine, all on the same subnet.
Each application instance is connected to every SwiftMQ instance for load sharing and redundancy, and the application uses the queue instances in round robin.
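The round-robin distribution described above can be sketched independently of the messaging API. Below is a minimal Java helper (hypothetical class and method names, not part of the SwiftMQ or JMS API) that cycles over a list of queue handles, which is essentially what the application does when it picks the next SwiftMQ instance:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper: hands out targets (e.g. per-router queue senders)
// in strict round-robin order. Thread-safe via an atomic counter.
class RoundRobinSelector<T> {
    private final List<T> targets;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobinSelector(List<T> targets) {
        this.targets = targets;
    }

    // floorMod keeps the index non-negative even after int overflow.
    T next() {
        int i = Math.floorMod(counter.getAndIncrement(), targets.size());
        return targets.get(i);
    }
}
```

With three queues, successive calls to `next()` return queue 1, 2, 3, 1, 2, 3, ... regardless of which application thread asks.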
Performance drops significantly in this configuration: running 1 application instance against 1 SwiftMQ instance is actually faster than 3 applications against 3 SwiftMQ instances, but then there is no redundancy.
I have run this setup with SwiftMQ 9.3 (eval copy) with the same results.
Can you explain the reason for this?
Maybe this is not the correct way to use SwiftMQ?
Maybe I need a different product/flavor than the standard router?
I suspect it is an application issue, but I can't see where the problem is.
The application calls receive with a 5000 millisecond timeout, but even when there is a message in the queue, none of the consumers in any of the application instances can get it.
How exactly is the message locked in the queue, and is it possible to release it?
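For context on the locking question: under JMS semantics, a message delivered to a consumer stays locked for that consumer until it is acknowledged (or the transacted session commits or rolls back); if a session holds a message without acknowledging it and is never closed, other consumers will not see that message. The snippet below is a stand-in sketch, not SwiftMQ code, using `java.util.concurrent.BlockingQueue.poll` to mirror the shape of `MessageConsumer.receive(5000)`:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Stand-in for the JMS consumer loop: poll(5, SECONDS) plays the role of
// MessageConsumer.receive(5000). A null return means the timeout expired
// without a message being available, which is not an error.
class TimedReceiveDemo {
    static String receiveOnce(BlockingQueue<String> queue) {
        try {
            String msg = queue.poll(5, TimeUnit.SECONDS);
            // In real JMS code, acknowledge the message (or commit the
            // transacted session) here; until then it remains locked
            // for this consumer and is invisible to the others.
            return msg;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

If the application's sessions are transacted or use client acknowledge and a commit/acknowledge is ever skipped, the symptom would look exactly like a message that is "stuck" in the queue; closing the offending session or connection normally releases the lock.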
Did you test it with a 9.x router and a 9.x client? Can you reproduce it with a sample? I'm not aware of any issues with SwiftMQ in this regard; this is a common function used everywhere. Maybe it is an issue with the old 5.x release, but I'm not able to check that.