Data Retention
About
Data Retention is a plugin that can be installed on the XCALLY server to manage the database without any specific system expertise.
Given a defined time interval for data retention, the plugin can automatically delete, backup or copy data contained in XCALLY database tables and files, such as voice recordings, voice mail, email and interaction attachments.
This makes managing storage space simple and straightforward, avoiding issues such as: running out of space, stopping services and managing historical data.
Benefits:
Automatically manages XCALLY database
Schedules backup and data transfer activities
Schedules and performs deletion of redundant data
Optimizes database performance
Reduces the system expertise required for task planning
This plugin has been developed by our Professional Services department. If you are interested in more information, contact your sales representative.
Installation
The software runs as an XCALLY Plugin, so it is distributed as a zip archive to be installed directly through the admin interface on XCALLY Motion V3, from the AppZone → Plugins section.
Once loaded and installed, you can open the plugin interface through the Plugins → Data Retention entry and complete the first configuration.
Plugin configuration
First of all you need to configure the database and the server details.
It is also possible to manually edit the config.json file in the zip archive, but the plugin interface is easier to use.
From the plugin interface:
go to the Configuration tab
insert the credentials for the motion2 database connection
specify the path of the folder where the database data is going to be dumped
enter the details of your server, including your server's reachable URL and an admin API key.
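The three configuration areas above can be pictured as sections of the config.json file. The field names below are purely illustrative assumptions (the actual keys are defined by the plugin's distributed config.json); this minimal sketch just shows a sanity check that every section is present before the plugin is restarted:

```python
import json

# Hypothetical config.json shape -- every key name here is an assumption,
# not the plugin's real schema.
config = {
    "database": {
        "host": "localhost",
        "port": 3306,
        "user": "xcally",
        "password": "secret",
        "name": "motion2",
    },
    "dumpFolder": "/var/opt/motion2/dumps",          # where dump files land
    "server": {
        "url": "https://my-xcally.example.com",      # your server's reachable URL
        "apiKey": "<admin-api-key>",                 # an admin API key
    },
}

def validate(cfg):
    """Minimal sanity check: all three sections from the Configuration tab exist."""
    missing = [k for k in ("database", "dumpFolder", "server") if k not in cfg]
    if missing:
        raise ValueError(f"config.json is missing sections: {missing}")
    return True

print(validate(config))
print(json.dumps(config, indent=2)[:60], "...")
```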
Remember to restart the plugin from the AppZone → Plugins section after saving the configuration!
IMPORTANT: Every time you change the database configuration, remember to also enable the Seed, since this allows the plugin to fill the database with the configurations it needs to work properly.
The Seed is deactivated automatically every time the system attempts to create those configurations.
TL;DR: Enable the Seed during the first configuration.
How it works
You configure one or more automations: routines that are executed periodically, based on an interval that can take one or multiple values;
For each automation, you can create multiple actions that are executed sequentially when the specified interval for that automation is met. The actions are queued, so the next action is not executed until the previous one has finished or failed. The queue is shared by all the automations, so two actions from different automations can never run at the same time, since all actions are placed in the same queue;
Each action is based on a table configuration, so for every table there are specific operations allowed. For example, for some tables you can only dump or delete rows, while others also have files, which you can copy, delete or link;
If the default table configurations are not enough, it’s always possible to create a custom table configuration, specifying your table and then choosing the allowed operations;
Every time the rows from a table are dumped, a dump file is created, and you can download or delete it;
Everything that happens in background is recorded and can be checked in the log files.
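The single shared queue described above can be sketched as follows. This is an illustration of the behavior, not the plugin's actual implementation: actions run strictly one at a time across all automations, and a failed action simply frees the queue for the next one.

```python
from collections import deque

# Illustrative sketch of the single action queue shared by all automations.
queue = deque()

def enqueue(automation, action):
    queue.append((automation, action))

def run_queue():
    """Run actions one at a time; a failure does not block the next action."""
    executed = []
    while queue:
        automation, action = queue.popleft()
        try:
            action()
            executed.append((automation, "ok"))
        except Exception:
            executed.append((automation, "failed"))
    return executed

def failing_action():
    raise RuntimeError("simulated failure")

enqueue("nightly-cleanup", lambda: None)
enqueue("weekly-backup", failing_action)
enqueue("nightly-cleanup", lambda: None)
print(run_queue())
# [('nightly-cleanup', 'ok'), ('weekly-backup', 'failed'), ('nightly-cleanup', 'ok')]
```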
Automations
The Automations view allows you to manage your automated routines. From here you can create new automations or modify and delete them.
You can create a new automation by clicking on the + button.
Every time an automation is executed, the Last Exec field is updated. In addition, in the Interval column it is possible to check when the next automation execution will take place.
Name: The name of the automation
Interval: When the automation is to be executed. You can choose the basic unit of repetition (Hour, Day, Week, etc.) and then select several sub-parts of that unit (e.g., every Hour at both minute 0 and minute 30)
Skip actions if still in execution: As mentioned before, the actions are all placed in a single queue, and each one is executed after the previous one completes.
In rare cases it could happen that, while an action is still waiting to be executed, the automation interval is met again and the system tries to insert the action into the queue a second time. If you want to avoid queueing the action while it is still pending, switch on this option.
Enabled: activate/deactivate the automation
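The "Skip actions if still in execution" behavior amounts to a duplicate check before re-enqueueing. A minimal sketch, under the assumption that pending actions are tracked by an identifier:

```python
# Illustrative sketch: before re-enqueueing, check whether the action is
# still pending in the shared queue.
pending = []

def schedule(action_id, skip_if_pending):
    """Return True if the action was queued, False if it was skipped."""
    if skip_if_pending and action_id in pending:
        return False  # interval fired again, but the action is still queued: skip
    pending.append(action_id)
    return True

print(schedule("dump-voice-recordings", skip_if_pending=True))   # queued
print(schedule("dump-voice-recordings", skip_if_pending=True))   # skipped
print(schedule("dump-voice-recordings", skip_if_pending=False))  # duplicate queued anyway
```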
After you create an automation, you can quick-edit it by clicking on the 3 dots menu of the specific automation.
From here you can see the automation settings, edit them, or add/modify/delete actions
Actions
From the Actions tab, in the automation settings, you can create new actions, or modify and delete them. When the automation interval is met, each action will be added to a queue and executed sequentially, one after another.
Here too, the Last Exec column is updated when the specific action is executed.
Table Configuration: table configuration you want to use for this action, which includes the table and the types of operations you can perform for that table.
You can choose between the default table configurations or the custom ones (see Custom Table Configurations)
Retention Actions: The operations that are allowed for the specified table. Depending on which operations you select, you’ll have additional fields displayed. The possible actions are:
Remove table rows: Remove the selected rows from the table
Dump table rows: Dump the selected table content into an SQL file
Remove files from disk: Remove the files related to each one of the selected rows on the table
Backup files on disk: Backup the files related to each one of the selected rows from the table
Backup files, create symlink to new files and remove: The same as ‘Backup files on disk’, with the addition that the original file is removed and in its place a symbolic link is created pointing to the backup file (the action automatically includes the backup and the removal parts)
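The backup-and-symlink action boils down to three filesystem steps: copy, remove, link. A minimal sketch (illustrative only, not the plugin's code), demonstrated on a temporary file:

```python
import os
import shutil
import tempfile

def backup_and_link(original, backup_dir):
    """Copy the file to the backup folder, delete the original, and leave a
    symlink at the original path pointing at the backup."""
    os.makedirs(backup_dir, exist_ok=True)
    backup = os.path.join(backup_dir, os.path.basename(original))
    shutil.copy2(original, backup)   # backup step (preserves metadata)
    os.remove(original)              # removal step
    os.symlink(backup, original)     # original path now resolves to the backup
    return backup

# Demo on a temporary file:
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "rec-001.wav")
with open(src, "w") as f:
    f.write("audio bytes")
backup_and_link(src, os.path.join(tmp, "backup"))
print(os.path.islink(src))      # the original path is now a symlink
print(open(src).read())         # reading it still yields the content
```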
Destination File Path: If you select an operation that includes the backup of the files, you’ll need to insert the destination folder where your files will be copied.
If you have a network resource, simply mount it as a local partition and configure it appropriately according to the cloud storage you use.
Keep last: How much data to keep in the database. Since this is a data retention tool, you specify how much data you want to keep, while the remaining data is processed by the application: dumped, removed, etc. (e.g., if you want to dump all data older than 1 month, insert 1 Month)
Work on data range: You can choose to work on a limited data range, instead of working on all data prior to the date specified by the Keep Last setting.
If you enable this, you’ll see two new input fields for the Work on setting, with which you specify how much data to consider going back from the Keep Last setting.
Conditions: possible conditions to be considered when querying the table to extract the data to be processed. Leave it blank if you wish to process all data.
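The Keep Last and Work on data range settings reduce to simple date arithmetic. An illustrative sketch of the cutoffs (assuming "1 Month" means roughly 30 days for this example):

```python
from datetime import datetime, timedelta

# Illustrative date math for the Keep Last and Work on data range settings.
now = datetime(2024, 6, 1)

keep_last = timedelta(days=30)   # e.g. Keep Last = "1 Month" (approximated as 30 days)
cutoff = now - keep_last         # rows newer than this are kept; older rows are processed

work_on = timedelta(days=7)      # optional Work on data range
range_start = cutoff - work_on   # only rows in [range_start, cutoff) are processed

print(cutoff.date())       # 2024-05-02
print(range_start.date())  # 2024-04-25
```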
Custom Table Configurations
This view allows you to create your own custom table configurations, in case, for example, the default table configurations are not enough for your actions.
You can create, modify or delete a custom table configuration.
In case a custom table configuration is used in one of your actions, you will not be able to delete it unless you remove the association or delete the action.
Name: name of table configuration
Custom Table: Enable this if you want to use a table different from the default ones in the motion2 database. If enabled, you can also indicate the Date Search Column, i.e. the table column used to filter the table data when performing retention operations
Table: name of database table you want to manage
Supported Retention Actions: The actions supported by this configuration (e.g., backup rows, remove rows, backup files, etc...).
Original File Path: In case you enabled one or more retention actions that require handling files, you’ll need to specify where the file is located. This field supports table-column autocompletion, meaning that you can specify the name of a column from the selected table, and its value will be inserted.
You can do this by inserting the name of the column between double curly brackets, like this: {{column_name}}. For example:
In the voice_recordings table you have the value column, which specifies the full path where the audio file is located. You can set Original File Path equal to {{value}}.
In the attachments table you have the basename column, which specifies the name of the file for the attachment. You know also that the default path for the attachments is /var/opt/motion2/server/files/attachments.
Knowing this you can set the Original File Path to /var/opt/motion2/server/files/attachments/{{basename}}.
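The {{column_name}} substitution described above can be sketched as a simple template render over a table row (illustrative only; the plugin's actual templating may differ):

```python
import re

def render_path(template, row):
    """Replace every {{column_name}} placeholder with that column's value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(row[m.group(1)]), template)

# voice_recordings: the value column already holds the full path
row = {"value": "/var/spool/asterisk/monitor/2023/05/17/rec-001.wav"}
print(render_path("{{value}}", row))
# /var/spool/asterisk/monitor/2023/05/17/rec-001.wav

# attachments: join the default folder with the basename column
row = {"basename": "invoice.pdf"}
print(render_path("/var/opt/motion2/server/files/attachments/{{basename}}", row))
# /var/opt/motion2/server/files/attachments/invoice.pdf
```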
Original File Path Base Folder: If you enable one or more retention actions that require handling files, if your files are organized in a subfolders structure that you want to keep also in the backup folder, you can specify the base path for the original folder, and the system will start from there and create all the subfolders in the backup folder specified in the actions configuration. For example:
By default the audio files for voice recordings are located under /var/spool/asterisk/monitor.
Now let’s say that you are using an XCALLY configuration in which your files are automatically organized by date, for example a year/month folder and a day folder (e.g. 2023/05/17). You will obtain something like this: /var/spool/asterisk/monitor/year/month/day (e.g. /var/spool/asterisk/monitor/2023/05/17).
If you also want to keep that subfolders division under the destination folder (/var/opt/recordings_backup/year/month/day/), you’ll need to specify the original base folder from where your subfolders start, so in this case the Original File Path Base Folder will be equal to /var/spool/asterisk/monitor.
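Preserving the subfolder structure below the base folder amounts to taking the path relative to the Original File Path Base Folder and re-rooting it under the destination. A minimal sketch of that computation:

```python
import os

def backup_destination(original, base_folder, dest_folder):
    """Re-root the file's path: keep the subfolders below base_folder
    and recreate them under dest_folder."""
    rel = os.path.relpath(original, base_folder)  # e.g. 2023/05/17/rec-001.wav
    return os.path.join(dest_folder, rel)

print(backup_destination(
    "/var/spool/asterisk/monitor/2023/05/17/rec-001.wav",
    "/var/spool/asterisk/monitor",      # Original File Path Base Folder
    "/var/opt/recordings_backup",       # Destination File Path
))
# /var/opt/recordings_backup/2023/05/17/rec-001.wav
```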
Dumps
When table data is dumped, a file with the .sql extension is created in the folder you specified in the plugin Configuration tab. You can find the list of all the table dumps in the Dumps view.
From the 3 dots menu on the right you can download the dump file, delete it, or restore it (in this case, while the restore is running, you will not be able to perform any other restores until the active one ends).
Dump file names are composed of:
Name of the dumped table.
Upper datetime limit of the extracted data.
Datetime of when the data has been extracted.
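Putting the three parts together, a dump file name could be assembled as below. The exact separator and datetime format used by the plugin are not documented here, so the "_" separator and ISO-like timestamps in this sketch are assumptions:

```python
from datetime import datetime

def dump_name(table, upper_limit, extracted_at):
    """Assemble a dump file name from the three parts listed above.
    Separator and timestamp format are assumptions, not the plugin's spec."""
    return "_".join([
        table,
        upper_limit.strftime("%Y-%m-%dT%H-%M-%S"),    # upper datetime limit of the data
        extracted_at.strftime("%Y-%m-%dT%H-%M-%S"),   # when the data was extracted
    ]) + ".sql"

print(dump_name("voice_recordings",
                datetime(2024, 5, 1, 0, 0, 0),
                datetime(2024, 6, 1, 3, 30, 0)))
# voice_recordings_2024-05-01T00-00-00_2024-06-01T03-30-00.sql
```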
Restores
In the Restores section, you can view the list of restores with their status (running, error, or completed).
In case of error, you can click on the 3 dots button and select “Show error” to view more details about it.
Logs
The application stores log files for each component and/or function that is executed in the background. You can find these logs in the Log view, where you can check when a specific log has been updated, and download or delete it.
The logs allow you to check the proper functioning of the application, and to eventually check for errors.
You can find logs in folder /var/log/xcally/data-retention
As mentioned, the log files are organized by component and by type, and can be of four types:
<component_name>-combined.log, includes the list of all complete logs for that component;
<component_name>-combined.date.log (e.g. api-combined.2024-01-01.log) : these files gather the log details for the specified component, including both debug and error messages, and for the day specified in the date part of the name. The logs are rotated daily, meaning that you’ll have a new combined log file for each component for each day. The rotation limit is set to 30 days, so after 30 days the oldest logs will be deleted;
<component_name>-error.log, includes the list of all error logs for that component;
<component_name>-error.date.log: these files gather the log details related to errors only, for the day specified in the date part of the name. The logs are rotated daily, meaning that you’ll have a new error log file for each component for each day. The rotation limit is set to 30 days, meaning that after 30 days the oldest logs will be deleted.
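The 30-day rotation described above amounts to selecting dated log files older than the retention limit. An illustrative sketch of that selection (not the plugin's actual rotation code), using the `<component_name>-combined.date.log` naming pattern:

```python
from datetime import date, timedelta

def expired_logs(filenames, today, limit_days=30):
    """Return dated log files older than the rotation limit."""
    cutoff = today - timedelta(days=limit_days)
    expired = []
    for name in filenames:
        # e.g. api-combined.2024-01-01.log -> the date sits between the dots
        parts = name.split(".")
        if len(parts) == 3 and parts[2] == "log":
            file_date = date.fromisoformat(parts[1])
            if file_date < cutoff:
                expired.append(name)
    return expired

logs = [
    "api-combined.2024-01-01.log",   # older than 30 days -> expired
    "api-combined.2024-02-10.log",   # within the limit -> kept
    "api-error.log",                 # undated file -> never rotated out
]
print(expired_logs(logs, today=date(2024, 2, 15)))
# ['api-combined.2024-01-01.log']
```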