I am using SwiftMQ 6.1.0 with a JDBC Store Swiftlet backed by a MySQL db on the same server. I have a consumer which consumes messages from the queue. When there is an error, the consumer creates a message schedule that sends the message back into the original queue after 5 mins. At busy times the consumer has a failure rate of around 10%, but it can still process almost 10 messages per second. With the high failure rate and large volume, the queue jams up over time, with about 5000-7000 messages in sys$scheduler and around 500-1000 messages in swiftscheduler, all scheduled to be returned to the queue after 5 mins.
The problem is that once all the mayhem is over, that is, the producers have stopped, sys$scheduler processes the schedules at an alarmingly slow rate of around 1 message per 2 seconds. It took almost 2 hours to clear 7000 scheduled messages. I find this happens whenever the numbers are high. At around 300+ messages, sys$scheduler is pretty fast, clearing them in a couple of minutes.
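For context, the retry pattern described above can be sketched as a small policy class. This is an illustration only, not SwiftMQ API: the actual hand-off to the Scheduler Swiftlet happens via message properties on a JMS send, whose names should be checked against the SwiftMQ scheduler documentation; the class and method names here are invented for the sketch.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch of the redelivery decision described above.
// The JMS send carrying the schedule is omitted; only the timing
// logic is modeled here.
public class RedeliveryPolicy {
    static final Duration RETRY_DELAY = Duration.ofMinutes(5);

    // When a message that failed at 'failedAt' should re-enter
    // the original queue.
    static Instant redeliveryTime(Instant failedAt) {
        return failedAt.plus(RETRY_DELAY);
    }
}
```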
My questions are:
1) Is there a way to make the scheduler perform faster and more accurately? 2 hours is way too late for a schedule that was supposed to fire 5 mins ago.
2) Is it that the scheduler is not optimized for the JDBC store?
3) Are there any settings that I may have missed?
I guess the problem is the huge number of message schedules. They are stored in queue sys$scheduler and picked up from there by the Scheduler Swiftlet when they are ready to run. The "pick up" goes via a message selector (id = <id>). So if you have thousands of message schedules, all ready at nearly the same time, you'll have that number of message jobs, each selecting a message out of sys$scheduler by its id.
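A rough cost model makes the nonlinearity plausible. If each of the n concurrent message jobs evaluates its selector against the remaining backlog (an assumption about the scan behavior, not a measured figure), the total number of selector evaluations grows quadratically:

```java
// Hypothetical cost model: job i scans i remaining messages in
// sys$scheduler to find its own by id, so the total work over a
// backlog of n jobs is 1 + 2 + ... + n = n(n+1)/2 evaluations.
public class SelectorCost {
    static long totalScans(long n) {
        return n * (n + 1) / 2;
    }
}
```

Under this model a backlog of 7000 costs about 24.5 million evaluations versus about 45 thousand for 300, which would match the observation that small backlogs clear in minutes while large ones take hours.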
My recommendation is to limit the number of threads for pool "scheduler.job". By default it has no limit, so a thread is created for each concurrent message job.
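The pool entry would look roughly like the following (the attribute names match the config the poster later applied; the value is a tuning choice, with 1 serializing the jobs):

```xml
<!-- Cap concurrent message jobs; the value shown is an example -->
<pool name="scheduler.job" kernel-pool="true" max-threads="1"/>
```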
a) Increase the cache size for the sys$ queues via a queue controller. The messages in sys$scheduler will then be completely in memory (but also in the database) and selection is fast. This change requires a reboot.
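Such a queue-controller entry might look like the following (the form matches SwiftMQ's queue-controller config as quoted later in this thread; the name and cache size are examples):

```xml
<!-- Example: keep messages in sys$ queues fully cached in memory -->
<queue-controller name="02" predicate="sys$%" cache-size="10000"/>
```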
b) If your messages are very large, leave the cache setting at its default and consider sending the scheduled messages as non-persistent. If the queue grows beyond the default cache size, messages are swapped to disk (not to the database) in quite fast random-access swap files. Note, however, that since the scheduled messages are non-persistent, they won't survive a reboot.
Done and tested. I have:
1) <pool name="scheduler.job" kernel-pool="true" max-threads="1"/>
2) <queue-controller name="02" predicate="sys$%" cache-size="10000"/>
3) Also increased the JVM heap size with "-Xmx1024M". This is a 2.8GHz/4GB RAM Dell server running Red Hat Enterprise Linux.
I can't check it today, but I guess these jobs expire before they run because you have only 1 thread to run them (before, each job was fired in a separate thread, but all were waiting on the sys$scheduler queue). Increase the pool to 10 and see if that works.
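As a config change, that suggestion is the same pool entry as before with a larger cap (value per the advice above):

```xml
<!-- Allow up to 10 concurrent message jobs -->
<pool name="scheduler.job" kernel-pool="true" max-threads="10"/>
```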