History Database Tool

Starting from xCALLY version 2.4.15 you can use the History Database Tool to automatically move old data from the XCALLY database to a new XCALLY History database.

We recommend using the XCALLY SHUTTLE GUI for short term reports only, so that queries in the Shuttle Analytics section stay fast. Mid and long term reports can be stored in a separate DB (called xcally_history) and managed with external business intelligence or reporting tools.

The following instructions explain how to replicate the long term information from the short term operational XCALLY DB to another DB (used with separate tools for long term report analysis). The involved tables are: agent_log, queue_log, call_log, cdr and call_history_log (the last one only if you are using the Tiger Dial module).
 

In this way you will get better performance on Shuttle (analytics, queries, searches...), keeping the xcally database optimized and avoiding its excessive growth.
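Before running the tool, you can check how large the involved tables have grown. The following query is just a sketch, assuming the default database name xcally and MySQL root access:

mysql -u root -p -e "SELECT table_name, table_rows, ROUND((data_length + index_length)/1024/1024) AS size_mb FROM information_schema.tables WHERE table_schema = 'xcally' AND table_name IN ('agent_log', 'queue_log', 'call_log', 'cdr', 'call_history_log');"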

 

Quick Guide

Requirements
1. java7 (if not installed, execute yum install java7)
2. node.js (if not installed, execute yum install node)
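You can quickly check whether both are already available, for example:

java -version
node --version

If either command is not found, install the missing package as indicated above.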

IMPORTANT:
- run this tool ONLY when the services are stopped.
- enable remote root MySQL access on the destination machine during the installation.
 Edit the MySQL configuration file (typically /etc/my.cnf on yum-based systems): under the [mysqld] section add bind-address = 0.0.0.0 if not present. After that restart the mysqld service.
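For example, assuming the configuration file is /etc/my.cnf (the location may differ on your system), the relevant lines look like this:

[mysqld]
bind-address = 0.0.0.0

Then restart MySQL, for instance:

service mysqld restart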


Download
Here you can find the History Database Tool installer package:
https://provisioning.xcally.com/files/packages/xcally-history-1.0.jar

You have to log in as the root user and execute the following commands:

yum install wget
wget --no-check-certificate https://provisioning.xcally.com/files/packages/xcally-history-1.0.jar
java -jar xcally-history-1.0.jar -console


Then you will see the following guided steps:

1) Specify the MySQL host, port and root password of the source database (xcally)

2) Specify the MySQL host, port and root password of the destination database (xcally_history)

3) Set the number of days that will be kept in the source database as 'recent data'. Default value: 180. Minimum value: 100. These 'recent data' remain available on Shuttle, in particular in the Analytics section, to create short term reports. The data older than the specified number of days will be copied or moved (see step 4) into the destination database xcally_history.

4) Specify if you want to delete the 'old data' from the source xcally database. Default: Yes.
The data contained in the source database are deleted only if they have been successfully copied to the destination database xcally_history.
Important: deleting the old data will improve Shuttle performance, but note that you won't be able to access these old data through Shuttle anymore, because they will be available ONLY in the destination database xcally_history, to be managed with external business intelligence or reporting tools as said before.

For example, if you set 180 days (in step 3) and choose to delete the 'old data' (step 4), the data of the last 180 days will be kept in the source database, while the data older than 180 days will be deleted from it (and remain available only in the destination database).

The app is installed under the directory /usr/local/xcally/history.
By default, the History Database Tool runs every day at 00:00.

If you want to change the run time, edit your crontab file (crontab -e).
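For example, to run the tool at 02:30 every night instead of midnight, the crontab entry could look like the following sketch (the exact command installed on your system may differ; the path assumes the default install directory):

30 2 * * * cd /usr/local/xcally/history && node repl.js etc/configuration.json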


Advanced Settings

If you want to execute the replication manually, run the script:
 

node repl.js etc/configuration.json
 

or add a new cron job:

0 0 * * * node repl.js etc/configuration.json


Here is an example of the configuration file:

{
   "source" : {
      "dialect": "mysql",
        "max" : 3,
        "min" : 0,
        "idle" : 10000,
      "host": "127.0.0.1",
      "port": 3306,
      "database": "xcally",
      "primary_key": "id",
      "id_source": "id_source",
      "user": "user",
      "password": "password",
      "model": "/models/call_log",
      "logging": false
   },
   
   "destination" : {
      "dialect": "mysql",
        "max" : 3,
        "min" : 0,
        "idle" : 10000,
      "host": "127.0.0.1",
      "port": 3306,
      "database": "xcally_logs",
      "primary_key": "id",
      "id_source": "id_source",
      "user": "user",
      "password": "password",
      "model": "/models/call_log",
      "logging": false
   },

   "general": {
      "statement": "select ifnull(max(id_source), 0) as max_id from call_log",
      "rows_select": 1000,
      "max_rows": 10000,
      "sleep": 2500,
      "log4js_category": "replica-call_log"
   },
   
   "clean" : {
      "enable": true,
      "table": "call_log",
      "primary_key": "id",
      "last_days": 180,
      "statement": "select ifnull(max(id),0) from :tablename where datediff(curdate(),str_to_date(rtdtm, '%Y-%m-%d')) > :days"

   },
   
   "optimize": {
      "enable": true,
      "table": "call_log"
   },
   
   "mail": {
      "enable": false,
      "host": "localhost",
      "port": 25,
      "user": "user",
      "password": "secret",
      "from": "from@company.com",
      "to": "to@company.com",
      "secure": false,
      "debug": false
   }
   
}


The configuration file contains the following sections:
 
  • the source, where you can find the parameters of the original xcally database

  • the destination, where you can find the parameters of the destination database

  • the general, where you can specify the maximum number of rows (max_rows parameter) that you want to copy to the destination database in each single run of the script.

  • the clean section, where you can choose, through the enable parameter, whether to delete the old records after the copy into the destination database.
    "enable": true means that the old records will be deleted; set it to false if you don't want to delete old records.

    Here you can also change, in the last_days parameter, the number of days to keep in the source database.
    As said before, the data contained in the source database are deleted only if they have been successfully copied to the destination database (a preview query sketch is shown after this list).
    In case of error the script can send you an email to report the issue (see the mail section).

  • in the optimize section you can choose whether to optimize the table after the records have been copied (see the OPTIMIZE TABLE command in the MySQL documentation)

  • if you want the script to send an email when an error occurs during the run, you have to activate the mail section ("enable": true) and set the mail parameters. Default: disabled.
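Before enabling the clean step, you may want to preview how many rows would be considered 'old' and, if needed, optimize the table manually. The following commands are only a sketch, assuming the call_log table, the rtdtm date column used in the sample configuration above and a 180 days threshold:

mysql -u root -p xcally -e "SELECT COUNT(*) AS old_rows FROM call_log WHERE DATEDIFF(CURDATE(), STR_TO_DATE(rtdtm, '%Y-%m-%d')) > 180;"

mysql -u root -p xcally -e "OPTIMIZE TABLE call_log;"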