Sage X3 PU9 Housekeeping Tasks
Document Information
Author: Mike Shaw, Sage UK X3 Support Team
This is not a complete list of all tasks you should perform on your own system, but it gives some pointers to common tasks that Sage would generally recommend you perform.
As this document is generic, you will need to adapt it for your own situation. If you need help to determine which housekeeping tasks are relevant for your specific site, you could consider engaging our Professional Services Group (PSG) to assist.
There are various components that make up your X3 system, but the most important ones to know the versions of are:
a. X3 Patch level
b. Runtime
c. Syracuse Web Server
Luckily you can confirm the versions for all three of these from one screen within X3 itself
On this first screen you can see the “Web Server” (i.e. Syracuse) version, in this case 9.6.34-0
Each X3 folder could potentially have a slightly different version, although for most customers this is not
the case. You can check each folder individually by clicking the link for the relevant folder. For
example, click the “X3PU9TRAIN_SEED_ONLINEDOC” folder name to get the X3 folder version and
runtime version information
2. Create corresponding users for the relevant folders via Parameters--> Users--> Users, with the relevant permissions
3. For Batch Server, navigate to Usage--> Batch Server--> Recurring Task Management and set the User Code to your batch user
4. For Web Services, navigate to Administration--> Administration--> Web Services--> Classic SOAP pools configuration and set the “user” to your web services user. NOTE: the Web Service user (and language) is just a default setting. When calling web services, the calling program may well use a different user and/or language setting, as the sketch below illustrates
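As an illustration of that point, here is a minimal, hypothetical sketch (in Python, using the zeep SOAP library) of calling a classic SOAP pool with an explicit user and language. The endpoint URL, pool alias, credentials and web service name are all placeholder values for illustration, not your site's actual configuration.

```python
# Hypothetical sketch: calling an X3 classic SOAP web service with an
# explicit user and language. All names below (host, port, pool alias,
# credentials, web service name) are placeholders for illustration.
import requests
from zeep import Client
from zeep.transports import Transport

# Authenticate as the web services user created in step 2 above.
session = requests.Session()
session.auth = ("wsuser", "password")

# The generic SOAP entry point exposed by the Syracuse server (adjust host/port).
wsdl = ("http://x3server:8124/soap-generic/syracuse/"
        "collaboration/syracuse/CAdxWebServiceXmlCC?wsdl")
client = Client(wsdl, transport=Transport(session=session))

# The call context supplies the pool and, optionally, a language that
# overrides the default configured on the pool.
call_context = {
    "codeLang": "ENG",
    "poolAlias": "SEEDPOOL",            # placeholder pool alias
    "requestConfig": "adxwss.trace.on=off",
}

# "run" executes a published web service; "DEMO" is a placeholder name.
result = client.service.run(call_context, "DEMO", "<PARAM/>")
print(result)
```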
Whilst both purging and archiving are optional, it is prudent to consider using either or both of these facilities in order to give the best possible performance and to keep disk usage to a minimum.
This shows the available Archive and Purge routines. You should review the list and configure according
to your business needs
Only data considered as closed (i.e. that which will not change any more) can be purged or archived
Some routines relate only to purging, some relate only to archiving and some allow both activities
You control how long to keep data with the “Days” setting. Any data that is purgeable (closed) and is older than the specified number of days qualifies for being purged on the next run.
“Frequency” controls the gap between purges. For example, if set to 10 days as shown above, then after a purge run it will not attempt another purge run for another 10 days. It is suggested you set this to 1 day for all tasks you are using, then create a scheduled batch task to perform the purge at an appropriate frequency for your requirements, for example weekly or monthly.
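As a plain illustration of how these two settings interact (this is not Sage's code, just the logic described above expressed in Python):

```python
# Illustrative only: the "Days" and "Frequency" logic described above.
from datetime import date

def record_qualifies(close_date: date, days_setting: int, today: date) -> bool:
    """A closed record qualifies for purging once it is older than Days."""
    return (today - close_date).days > days_setting

def purge_due(last_run: date, frequency_days: int, today: date) -> bool:
    """A new purge run only happens once Frequency days have elapsed."""
    return (today - last_run).days >= frequency_days

today = date.today()
# With Frequency=1 the purge is attempted on every scheduled batch run,
# leaving the real schedule to the batch task, as suggested above.
if purge_due(date(2017, 1, 1), 1, today) and record_qualifies(date(2016, 6, 1), 90, today):
    print("This record would be purged on the next run")
```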
You should pay particular attention to the purge jobs starting with “A”, as most of these will likely need to be enabled and will apply to most systems and most folders.
The ABATCH task is a special case, as it applies only to the X3 folder, because it stores the information relating to the Batch Server itself.
As another example, ATRACE is used to manage the log files which are created by most batch jobs. These files are retained in the folder's TRA directory until purged.
For the purposes of demonstration, this document will show the setup for purging for these two jobs,
but you should review and decide which jobs are applicable for your own circumstances
Log into the X3 Folder itself and then go to option Usage--> Batch Server--> Recurring Task Management
(GESABA)
Once the task has been saved, you can see the executions of the task by navigating to Usage--> Batch Server--> Request Management
The recurring task may not appear on the list immediately, as it won’t be in the list until the morning of
its first run
This task relates to each folder individually. For our example, we will setup for the X3 Folder itself
Navigate to option Usage--> Batch Server--> Recurring Task Management (GESABA)
Parameters
Repeat these same setup steps for your other folders in this solution
In this case, as the execution is for later on today, we can see the tasks immediately in the Request
Management screen
You should review your data volumes in conjunction with your business requirements and decide which
tables (if any) could or should be archived to achieve these business objectives
Check the Folder setup to confirm after completion: Parameters--> General Parameters--> Folders
This task relates to each folder individually. For our example, we will setup for the SEED Folder
Navigate to option Usage--> Batch Server--> Recurring Task Management (GESABA)
You can run the Archive/Purge interactively, by navigating to Usage--> Usage--> Archive/Purge
Connect to your history folder, then do any queries you are interested in to see the historical data
It is IMPORTANT that you stop the Accounting Task process before you stop the batch server, and that you stop the batch server process itself before a server restart or before shutting down SQL Server. Not doing so can cause issues when trying to restart these processes.
This task can be automated by scheduling it to run at specific times. This process uses a different scheduler and is set up as described below:
The default settings should be sufficient for the scheduled update, so you can just click the “Schedule
index update” option
Pick the schedule created in the previous step and click the blue tick to save
You should additionally consider when the system backups are taking place, as there may also be some tasks which should not be run during these times
You should therefore draw up a list of times during which it is acceptable to run the batch tasks and
schedule them accordingly. You can consider if you need to enforce these hours using “Hourly
Constraints” and/or “Batch server calendar” for the Batch Server tasks
Navigate to Parameters--> Usage--> Batch Server, where you will find these options to allow you to configure allowable dates and allowable days/hours of batch task execution
Once hourly constraints have been configured, you can modify the Task configuration to ensure it conforms. Navigate to Usage--> Batch Server--> Task Management and configure the tasks as needed
These two items alone should provide a good guide to the type and frequency of backups that need to
be taken, in order to satisfy these requirements
If you have a multi-server Sage X3 instance (different X3 components spread out across different servers), you should consider all these servers as one whole in a backup strategy, i.e. you will need to back up all the servers and perhaps also need to synchronize these backups for some servers
You need to ensure you take SQL database backups and SQL Server log file backups such that any
business recovery objectives are achieved
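Exactly how you take these backups will depend on your site (maintenance plans, third-party tools, etc.), but as a hedged sketch, the following issues a full database backup and a transaction log backup from Python via pyodbc; the server, database name and file paths are assumptions for illustration.

```python
# Hedged sketch: full and log backups for the X3 database via pyodbc.
# Server, database name and backup paths are illustrative assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlserver;"
    "Trusted_Connection=yes;",
    autocommit=True,  # BACKUP cannot run inside a user transaction
)
cur = conn.cursor()

cur.execute("BACKUP DATABASE [x3] TO DISK = 'D:\\Backups\\x3_full.bak'")
while cur.nextset():  # consume informational result sets so the backup completes
    pass

cur.execute("BACKUP LOG [x3] TO DISK = 'D:\\Backups\\x3_log.trn'")
while cur.nextset():
    pass

conn.close()
```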
Essential configuration data and other user data, such as documents, are stored in the Mongo Database, so you therefore also need to ensure you back up the MongoDB database
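A common way to do this is with MongoDB's standard mongodump utility; the sketch below simply invokes it from Python, with the host, port and output path as illustrative assumptions (the Syracuse MongoDB instance may listen on a different port at your site).

```python
# Illustrative sketch: backing up the Syracuse MongoDB database with the
# standard mongodump utility. Host, port and output path are assumptions.
import subprocess
from datetime import date

out_dir = r"D:\Backups\mongo_" + date.today().isoformat()
subprocess.run(
    ["mongodump", "--host", "localhost", "--port", "27017", "--out", out_dir],
    check=True,  # raise an error if the dump fails, so failures are noticed
)
```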
The file system and Windows registry should also be backed up regularly, to ensure you capture regularly changing files such as log files and maintain backups of relatively static files, for example after patching
It is therefore important to have a change control procedure that allows you to plan and understand
what changes are applied to any component of your X3 instance or the supporting infrastructure, so
that:
Changes can be applied in a controlled manner
Any issues introduced by any change can be identified and reverted if necessary
The business can understand any risks from proposed changes
Business users can be scheduled to be involved in testing and changes
For some, this process may be as simple as a spreadsheet listing any changes that have been made, but
in other cases there may be formalized systems to request and authorize changes before they are
applied
Change control will often only apply to LIVE instances, although there is an argument for it to be applied to TEST instances also
With Sage X3, there are generally multiple patching activities that need to be undertaken to apply a
patch. This is generally controlled by the nature of the main patch and is documented in the patch itself.
For example, when you review the patch documentation for PU9 Patch 5 you will find there are
mandatory pre-patch steps, which include applying the Syracuse 9.5 patch, as well as both Mandatory
and Recommended post-patch activities, such as applying the latest Print Server patch
……
NOTE: when applying a “Hotfix” you should still go through these same steps, as for any other patch.
Even though the impact of a HotFix is likely to be less, you still need to understand the impact and
perform testing to confirm its effect
NOTE: Sage Support strongly advise that you should always apply all the latest Technology patches,
even though some may be flagged as “Recommended” rather than “Mandatory” in the patch
documentation
Once all testing is successfully completed, schedule the patching activity for your LIVE instance.
You will go through similar steps as per the TEST instance:
Ensure all users are logged out
Take a pre-patch backup
Perform pre-requisite tasks
Apply the patches
Perform post-patch tasks
Validate and perform non-destructive testing in LIVE environment
Allow key users onto the system for final checks
Release to all users
From an X3 perspective, this boils down to deciding what data you need to audit, for example failed
logins, updates to key fields on certain tables, etc.
WARNING: the more auditing you enable, the more overhead you create on system performance, and you could also potentially generate large amounts of audit data, which then needs to be stored and managed. You should therefore set up auditing for the minimum amount needed to achieve the business objectives
Additional notes
Audit data is written to the AUDITH and AUDITL tables
A workflow batch task triggers for each line of AUDITH, if the workflow option is selected
Use the functions in Usage--> Audit to review the data
Audit data can be purged via Usage--> Archive/Purge
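If you want to see the raw rows behind those functions, a hedged sketch follows; the AUDITH/AUDITL table names come from the notes above, but the schema name (the folder, here SEED) and the connection details are assumptions, and the Usage--> Audit functions remain the supported way to review this data.

```python
# Hedged sketch: peeking at recent audit header rows directly in SQL Server.
# Schema name (folder) and connection details are assumptions; normally you
# would use the Usage--> Audit functions instead.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlserver;"
    "DATABASE=x3;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("SELECT TOP 20 * FROM SEED.AUDITH ORDER BY 1 DESC")
for row in cur.fetchall():
    print(row)
conn.close()
```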
Query back a date range that covers today's date and enter BPCUSTOMER for the table
You will see four rows for the two updates, one before and one after each update. You can drill into the “Details of fields” from here if you wish
Query back a date range that covers today's date and enter BPCUSTOMER for the table
Also check the “Details of fields”
This shows two records for the two updates, but also has the field information immediately available,
showing the before and after values
For all three components, you may wish to regularly scan the log files for any errors or unusual
messages for further investigation, as a proactive measure to identify potential user issues
Over time, you will find a lot of log files will accumulate and some of the log files will grow quite large. It
is prudent to periodically archive these log files to a different location in order to control disk space
usage and make it easier to use the log files when they are needed
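As an example of such a proactive scan, this short Python sketch walks the three log directories used as examples in this document and prints any ERROR or WARN lines; adjust the paths and patterns for your installation.

```python
# Illustrative proactive scan for error/warning lines across the component
# log directories. The paths are the examples used in this document.
from pathlib import Path

LOG_DIRS = [
    r"C:\Sage\Syracuse\syracuse\logs",
    r"C:\Sage\ElasticSearch\logs",
    r"C:\Sage\MongoDB\logs",
]

for log_dir in LOG_DIRS:
    for log_file in Path(log_dir).glob("*.log"):
        with open(log_file, errors="ignore") as fh:
            for line_no, line in enumerate(fh, 1):
                if "ERROR" in line or "WARN" in line:
                    print(f"{log_file}:{line_no}: {line.rstrip()}")
```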
Syracuse
There is no option to change the level of logging, so you cannot change the amount of information written to the log files
There is also no automated way to archive the log files themselves, so you should regularly archive these logs, for example by using ZIP or a similar tool to archive the old logs every month or so. This allows you to keep the number and size of the log files to a manageable level. The Syracuse service needs to be shut down in order to archive the latest log files.
The log files are located in <SYRACUSE INSTALL DIRECTORY>\syracuse\logs, for example “C:\Sage\Syracuse\syracuse\logs”
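A sketch of the sort of monthly ZIP archiving described above is shown below (Python, standard library only); it moves logs older than 30 days into a dated archive. Remember the service must be stopped first if you want to include the latest files, and the same approach works for the Elastic Search and MongoDB logs discussed next.

```python
# Illustrative monthly archive of old Syracuse logs into a dated ZIP file.
# Stop the Syracuse service first if the newest logs must be included.
import time
import zipfile
from pathlib import Path

log_dir = Path(r"C:\Sage\Syracuse\syracuse\logs")
archive = log_dir / f"logs_{time.strftime('%Y%m')}.zip"
cutoff = time.time() - 30 * 24 * 3600  # anything older than 30 days

with zipfile.ZipFile(archive, "a", zipfile.ZIP_DEFLATED) as zf:
    for log_file in log_dir.glob("*.log"):
        if log_file.stat().st_mtime < cutoff:
            zf.write(log_file, log_file.name)
            log_file.unlink()  # remove the original once safely archived
```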
Elastic Search
The Elastic Search configuration file “logging.yml”, located in <ELASTIC SEARCH INSTALL DIRECTORY>/config (for example “C:\Sage\ElasticSearch\config”), allows you to change the level of logging. By default it has INFO level logging for many components, which is quite verbose, so on your LIVE installation you may wish to reduce this log level to WARN instead
There is no automated way to archive the log files themselves, so you should regularly archive these logs, for example by using ZIP or a similar tool to archive the old logs every month or so (the same approach as shown for Syracuse above). This allows you to keep the number and size of the log files to a manageable level. The Elastic Search service needs to be shut down in order to archive the latest log files.
The log files are located in <ELASTIC SEARCH INSTALL DIRECTORY>\logs, for example “C:\Sage\ElasticSearch\logs”
There is no automated way to archive the log file “mongodb.log” and it can grow quite quickly. You should regularly archive this log file, although you will need to stop the MongoDB service in order to do this. For example, use ZIP or a similar tool to archive the old log every month or so. This allows you to keep the size of the log file to a manageable level, so that when it is needed for diagnostic purposes it is easy to manage and search through for relevant messages.
The log file is located in <MONGODB INSTALL DIRECTORY>\logs, for example “C:\Sage\MongoDB\logs”
mongoperf
mongostat
mongotop
You can find the documentation for these tools on the MongoDB web site
https://docs.mongodb.com/manual/
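For example, to capture a short sample of mongostat output to a file for later baseline comparison, you could invoke it from Python as below; the host, port and output path are assumptions.

```python
# Illustrative sketch: save ten mongostat samples for baseline comparison.
# Host, port and output path are assumptions for illustration.
import subprocess

with open(r"C:\Sage\MongoDB\logs\mongostat_sample.txt", "w") as out:
    subprocess.run(
        ["mongostat", "--host", "localhost:27017", "-n", "10"],
        stdout=out,
        check=True,
    )
```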
The trouble with this approach is that you may gather a lot of performance data with a performance problem in situ, but it may not be clear what the root cause is, or, even worse, you may make incorrect assumptions because you do not know what is considered “normal” for the performance statistics you are reviewing
You may wish to consider an alternative approach, which would be to regularly gather performance data
whilst the system is running normally
There are various tools available for both Windows and Linux platforms. For example, Windows
Performance Monitor can be used to schedule the regular gathering of a wide range of system statistics,
including SQL Server information
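As a lightweight, cross-platform alternative sketch (assuming the third-party psutil package, not Windows Performance Monitor itself), the snippet below appends a timestamped CPU/memory/disk sample to a CSV file; run on a schedule, this builds up a picture of what “normal” looks like.

```python
# Minimal sketch of a scheduled performance baseline using psutil
# (pip install psutil). Run regularly (e.g. via Task Scheduler) to learn
# what "normal" looks like before a problem occurs.
import csv
from datetime import datetime

import psutil

with open("perf_baseline.csv", "a", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow([
        datetime.now().isoformat(timespec="seconds"),
        psutil.cpu_percent(interval=1),      # % CPU over a 1-second sample
        psutil.virtual_memory().percent,     # % memory in use
        psutil.disk_usage("C:\\").percent,   # % disk used on C:
    ])
```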
Sage X3
There are various X3 functions that can be used to check or manage your X3 instance which could be
useful to an X3 System Administrator, although many would only be used when required. The most
notable are discussed below:
This function shows the print jobs currently running and allows users with the appropriate authorization
level to delete tasks or to change their priority
Whilst it can be difficult to analyse the output this function generates, its objective is to compare the links between tables described in the X3 data dictionary with the actual tables stored in the database itself. This process is resource intensive, as it reviews all the data in any tables you choose to run it against, so it should only be run at quiet times.
WARNING: you should not attempt to correct any standard tables if they are shown as having potential issues in the output, but should instead log a call with Sage Support to ask for assistance
This routine should complete quite quickly. It provides a report comparing the X3 Data Dictionary
description of the indexes against the indexes that actually exist in the database.
It is important that the table statistics are up to date to reflect the current data volumes and
distribution. By default, this is managed automatically by SQL Server
You can check the database tables’ statistics are being automatically generated and see the last
date/time the statistics were gathered. If needed, you can also use this screen to select certain tables
and then force a new statistics generation for those tables.
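If you prefer to check this from the database side, the hedged sketch below queries SQL Server's sys.stats with STATS_DATE to show when statistics were last updated for a given table; the connection details and example table are assumptions.

```python
# Hedged sketch: when were statistics last updated for a table?
# Connection details and the example table name are assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlserver;"
    "DATABASE=x3;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    """
    SELECT OBJECT_NAME(s.object_id)            AS table_name,
           s.name                              AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE OBJECT_NAME(s.object_id) = 'BPCUSTOMER'  -- example table
    """
)
for row in cur.fetchall():
    print(row.table_name, row.stats_name, row.last_updated)
conn.close()
```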
These log files are not automatically purged, so you should also regularly monitor the log file usage and
delete as and when these logs are no longer needed
The automatically generated log files are kept for 10 days, but any manually generated ones are not
automatically purged. You should regularly monitor the log file usage and delete as and when these logs
are no longer needed