Configuring the Push Notifications service for high availability

High availability for the Push Notifications service is based on clustering. You add high availability support by installing additional servers that run the Push Notifications service. The BEMS instances that host the Push Notifications services that you designate to participate in high availability must share the same database. The instances in the high availability environment perform a check approximately every minute to verify whether all of the instances are available. If a BEMS instance is offline, its users are distributed among the available instances.

Consider the following scenario: your BEMS environment is configured for high availability and includes four BEMS instances that support 10,000 users. BEMS_name1 is taken offline for maintenance. The other BEMS instances routinely perform a search of available BEMS instances.
- If the BEMS instance is available, the log files display the instance with a state of GOOD:

  ```
  <YYYY-MM-DD>T14:16:59.385-0500 CEF:1|pushnotify-ha-dbwatcher|pushnotify-ha-dbwatcher|0.13.21|INFO|unknown|5|ID=297 THR=DbWatcher-0 CAT=ProducerTasksRunner MSG=Worker BEMS_name1 is in state GOOD with 1/10000 users (0.01% capacity). Last status was updated at "<YYYY-MM-DD> T19:16:59.359 UTC". FeatureSet:AgingStaleUser, RichPush, VIPNotification, apnsPayload2k, badgeCount, subFolderNotification, pushSettings, smimeCertificateLookup, soundSettings, badgeCount2, autodiscover, notificationsSettings, localizedPush, delayWriteSyncState, RightToDisconnect, FCMRelayService updated at "1532523850857"
  ```
- If the BEMS instance is unavailable, the log files display the instance with a state of BAD and users are distributed as required. In the following log example, two BEMS instances, BEMS_name1 and BEMS_name2, are checked and the BEMS_name1 instance that is unavailable is flagged as BAD:

  ```
  <YYYY-MM-DD>T14:42:33.874+0100 CEF:1|pushnotify-ha-comm|pushnotify-ha-comm|0.15.3|INFO|unknown|5|ID=309 THR=DbWatcher-0 CAT=HaProducerImpl MSG=BAD!! Last known status of HaWorker "BEMS_name1" is "<YYYY-MM-DD>T10:45:47.831 UTC". It is before cut-off time "<YYYY-MM-DD> T13:37:33.860 UTC"
  <YYYY-MM-DD>T14:42:33.874+0100 CEF:1|pushnotify-ha-dbwatcher|pushnotify-ha-dbwatcher|0.15.3|INFO|unknown|5|ID=310 THR=DbWatcher-0 CAT=ProducerTasksRunner MSG=Got status of 2 workers
  <YYYY-MM-DD>T14:42:33.874+0100 CEF:1|pushnotify-ha-dbwatcher|pushnotify-ha-dbwatcher|0.15.3|INFO|unknown|5|ID=310 THR=DbWatcher-0 CAT=ProducerTasksRunner MSG=Worker BEMS_name2 is in state GOOD with 359/10000 users (3.59% capacity). Last status was updated at "<YYYY-MM-DD> T13:42:33.693 UTC". FeatureSet:AgingStaleUser, RichPush, VIPNotification, apnsPayload2k, badgeCount, subFolderNotification, pushSettings, smimeCertificateLookup, soundSettings, badgeCount2, autodiscover, notificationsSettings, localizedPush, delayWriteSyncState, RightToDisconnect, FCMRelayService, Delegate updated at "1545046557729"
  <YYYY-MM-DD>T14:42:33.875+0100 CEF:1|pushnotify-ha-dbwatcher|pushnotify-ha-dbwatcher|0.15.3|INFO|unknown|5|ID=310 THR=DbWatcher-0 CAT=ProducerTasksRunner MSG=Worker BEMS_name2 is idle 359/10000 (3.59% capacity)
  <YYYY-MM-DD>T14:42:33.875+0100 CEF:1|pushnotify-ha-dbwatcher|pushnotify-ha-dbwatcher|0.15.3|INFO|unknown|5|ID=310 THR=DbWatcher-0 CAT=ProducerTasksRunner MSG=Worker BEMS_name1 is in state BAD with 0 users. Last status was updated at "<YYYY-MM-DD> T10:45:47.831 UTC"
  ```
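The health check shown in the logs above can be sketched as follows. This is a minimal illustration, not the actual BEMS implementation: the cut-off window, the round-robin redistribution, and all names are assumptions chosen to mirror the log example, where a worker whose last status update is older than the cut-off time is flagged BAD and its users move to GOOD instances.

```python
from datetime import datetime, timedelta, timezone

# Assumed staleness window: a worker whose last status update is older
# than this is flagged BAD (the real cut-off is internal to BEMS).
CUTOFF = timedelta(minutes=65)

def classify_workers(last_seen, now):
    """Return {worker: 'GOOD' or 'BAD'} based on last status-update time."""
    cutoff_time = now - CUTOFF
    return {name: ("GOOD" if ts >= cutoff_time else "BAD")
            for name, ts in last_seen.items()}

def redistribute(users, states):
    """Assign users round-robin to workers currently in state GOOD."""
    good = [w for w, s in states.items() if s == "GOOD"]
    if not good:
        raise RuntimeError("no healthy Push Notifications instances available")
    return {user: good[i % len(good)] for i, user in enumerate(users)}

# Mirror the scenario: BEMS_name1 is offline for maintenance.
now = datetime(2024, 1, 1, 14, 42, tzinfo=timezone.utc)
last_seen = {
    "BEMS_name1": now - timedelta(hours=4),    # stale: before the cut-off
    "BEMS_name2": now - timedelta(minutes=1),  # reported a minute ago
}
states = classify_workers(last_seen, now)
assignments = redistribute(["user1", "user2"], states)
# BEMS_name1 is BAD, BEMS_name2 is GOOD, so both users fail over to BEMS_name2.
```

The key design point the logs reflect is that availability is decided from the shared database (the last written status timestamp), not from direct pings between instances, which is why all participating instances must share the same database.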
When you configure the Push Notifications service for high availability, you complete the following actions:

- During the installation of additional Push Notifications service instances, on the Database Information screen specify the same database for each instance (for example, BEMS-Core).
- Configure the BlackBerry Work connection settings. For instructions, see "Configure BlackBerry Work connection settings" in the BlackBerry Work, Notes, and Tasks Administration content. If you have the Mail service installed on multiple computers, repeat this step for each computer that hosts the service.