21.4 AppDynamics - Platform 10 12 2021
This page provides information on installing, configuring, and administering an on-premises AppDynamics Application Performance Monitoring
(APM) Platform deployment.
Installation Overview
Before you install the platform, review the requirements for the components you plan to install and prepare the host machines. The requirements vary
based on the components you deploy and the size of your deployment.
For the Controller and Events Service, you first need to install the AppDynamics Enterprise Console. You then use the application to deploy the Controller
and Events Service. Note that the Events Service can be deployed as a single node or a cluster. The Enterprise Console is not only the installer for the
Controller and Events Service; it can manage the entire lifecycle of new or existing AppDynamics Platforms and components.
You cannot use the Enterprise Console to perform the End User Monitoring (EUM) Server installation. Instead, you must use a package installer that
supports interactive GUI or console modes, or a silent response file installation.
Follow these tasks before you start the installation process for the AppDynamics APM Platform:
You can get the software for installing the platform components from the AppDynamics download site. See Download AppDynamics Software.
The AppDynamics Enterprise Console is a GUI and command-line based application that can manage the installation, configuration, and administration of
the Controller and Events Service.
For the EUM Server, you must continue to use the package installer to deploy the EUM Cloud. See EUM Server Deployment
After you install the platform, you can configure and manage different components with component-specific scripts. Based on how you deploy the platform,
you might use a combination of the Enterprise Console and package installers to install and manage the various components of the platform.
You can find a more detailed diagram, as well as a SaaS architecture diagram, in PDF format. For a diagram of the Enterprise Console, see the Enterprise
Console Platforms Architecture. For a diagram of the Synthetic Server deployment, see the Synthetic Server Deployment Architecture.
Platform Components
The following table describes how the components work together in the AppDynamics platform.
Application Performance Management: App Server Agents attach to monitored applications and send data to the Controller.
Server Visibility: Machine Agents reside on monitored servers and report data to the Controller.
Application Analytics: The Analytics Dynamic Service (formerly called the Analytics plugin) on the App Server Agent communicates with a local Analytics Agent instance. One or more Analytics Agents in a deployment send data to the Events Service. The Analytics Agent is bundled with the Machine Agent but can also be installed and run individually.
End-User Monitoring: For an on-premises EUM installation, you configure the web and mobile real user monitoring agents to connect to the on-premises EUM Server. The EUM Server sends data to the Events Service cluster. The optional Custom EUM Geo Server stores EUM Geo Resolution data. The optional Synthetic Server receives synthetic job requests from the Controller, which are then fetched by the Synthetic Services.
Platform Connections
The following table lists and describes the traffic flow between AppDynamics platform components.
APM configuration and metric data in the on-premises Controller MySQL database
EUM event data in the Events Service
Transaction and log analytics data in the Events Service
EUM Geo Resolution data in the on-premises GeoServer
EUM Synthetic data in the on-premises Synthetic Server
After this process, you can perform optional configurations and administrative tasks described in Secure the Platform.
To start the installation or upgrade process, see Platform Requirements for information about requirements and pre-installation tasks.
Related pages:
Platform Requirements
Controller System Requirements
EUM Server Requirements
Events Service Requirements
You can refer to the child pages for the platform requirements and deployment guides.
Irrespective of the server's computing power, installing more than one controller on the same server is not supported.
Based on your AppDynamics deployment, you may use the Enterprise Console for the following tasks:
The following tasks cannot be performed using the Enterprise Console, and therefore, must be performed manually:
For a more detailed description of this path, see Platform Installation Quick Start.
For a more detailed description of this path, see Discovery and Upgrade Quick Start.
(Events Service 20.2+ is backward compatible with the other platform components.)
The Synthetic Server 20.x+ versions are compatible with any of the other component versions 20.x+ and the Synthetic Private Agent 20.x+.
To get the versions of the Enterprise Console, Controller, and Events Service, see Getting Platform Versions. For the EUM Server and Synthetic Server, use
the following endpoints:
http(s)://<on-prem-eum-server_domain-name>:7001/eumcollector/get-version
http(s)://<on-prem-synthetic-server_domain-name>:10101/version
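As a sketch, these version endpoints can be queried with curl. The hostnames below are placeholder assumptions; substitute your on-premises server domain names, and use port 7002 with https if SSL is enabled on the EUM Server.

```shell
# Placeholder hostnames -- substitute your on-premises server domains.
EUM_HOST=eum.example.com
SYNTH_HOST=synthetic.example.com

EUM_VERSION_URL="http://${EUM_HOST}:7001/eumcollector/get-version"
SYNTH_VERSION_URL="http://${SYNTH_HOST}:10101/version"

# Query each endpoint, for example:
#   curl -s "$EUM_VERSION_URL"
#   curl -s "$SYNTH_VERSION_URL"
echo "$EUM_VERSION_URL"
echo "$SYNTH_VERSION_URL"
```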
For more information on downloading the software, see Download AppDynamics Software.
The requirements for different components in the AppDynamics platform are based on the performance profile you select. This page describes the different
performance profiles and how to determine the profile size you need.
When the Enterprise Console host is not shared with the Controller host, however, the Enterprise Console requires additional memory and disk space.
See Enterprise Console Requirements and Prepare the Controller Host for additional space requirements.
Network Considerations
Your network or the host machine may have built-in firewall rules that you will need to adjust to accommodate the AppDynamics on-premises platform.
Specifically, you may need to permit network traffic on the ports used in the system. For more information, see Port Settings.
For expected bandwidth consumption for the agents, see Install App Server Agents.
You can use the following file systems for machines that run Linux:
ZFS
EXT4
XFS
Internationalization Support
The Controller and App Agents provide full internationalization support, with support for double- and triple-byte characters. This support provides the
following abilities:
Controller UI users can enter double- or triple-byte characters into text fields in the UI.
The Controller can accept data that contains double- or triple-byte characters from instrumented applications.
More Information
For requirements that are specific to product components, see the following pages:
When deploying AppDynamics, you may need to open ports in a network firewall or configure a load balancer to enable communication between the
Controller and the rest of the AppDynamics platform.
For SaaS, you only need to adjust your infrastructure to accommodate the HTTPS port provided to you by AppDynamics. For an on-premises deployment,
however, you may need to make additional adjustments based on the information here.
Enterprise Console port (9191): Yes. The application uses port 9191 for all traffic.
SSH port (22): The port needs to be open between the Enterprise Console and the remote hosts it manages. This applies to Unix only and is not configurable. If you have a requirement to configure the port, contact AppDynamics support.
Events Service REST API port (9080): If the Events Service and Controller are on different hosts, you need to configure the port in the firewall or load balancer.
Events Service REST API admin port (9081): If the Events Service and Controller are on different hosts, you need to configure the port in the firewall or load balancer.
EUM Server port, HTTP (7001): If EUM and the Controller are on different hosts, you need to configure the port in the firewall or load balancer.
EUM Server SSL port, HTTPS (7002): If EUM and the Controller are on different hosts, you need to configure the port in the firewall or load balancer.
At installation time, you can enter different ports manually. After installation, you can change the port settings by either reinstalling the Controller or by
editing the port configuration as defined on the Enterprise Console Configurations page or in the underlying GlassFish application server, as described
in the following sections.
You can also edit the ports manually by editing configuration files used by the application server for the Controller domain. Updating the ports manually,
however, will cause the Enterprise Console to have no visibility into the updates and cause health-check errors.
The following sections list the settings you need to modify to change a port.
1. In domain.xml, change the database listening port where it appears under the jdbc-connection-pool element named controller_mysql_pool. It appears as the value of the property named portNumber.
2. Edit the file appserver/glassfish/domains/domain1/imq/instances/imqbroker/props/config.properties to change the "imq.persist.jdbc.mysql.property.url" variable so that it includes the new port number. This variable is the JDBC connection string.
3. In db/db.cnf, set the "port=" variable to your new port setting.
4. In bin/controller.bat (.sh), change the "DB_PORT" variable to your new port setting.
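Steps 3 and 4 can be sketched with sed. The demonstration below runs against scratch copies of the files; the file contents and the port value 3390 are assumptions for illustration, and on a real host you would edit the files under your Controller home instead.

```shell
# Demonstration of steps 3-4 on scratch copies of the files (the real files
# live under your Controller home; contents and port value are assumptions).
NEW_PORT=3390
tmpdir=$(mktemp -d)
printf 'port=3388\n' > "$tmpdir/db.cnf"
printf 'DB_PORT=3388\n' > "$tmpdir/controller.sh"

# Step 3: update the port= variable in db/db.cnf
sed -i "s/^port=.*/port=${NEW_PORT}/" "$tmpdir/db.cnf"
# Step 4: update the DB_PORT variable in bin/controller.sh
sed -i "s/^DB_PORT=.*/DB_PORT=${NEW_PORT}/" "$tmpdir/controller.sh"

cat "$tmpdir/db.cnf" "$tmpdir/controller.sh"
```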
The following pages describe considerations and instructions for deploying the Controller on physical machines.
Related Pages:
This page describes the common configuration, tuning, and environment requirements for the machine that hosts the Controller.
These considerations apply whether the machine runs Linux or Windows or is a virtual machine. For specific considerations for your operating system
type, see the related pages links.
MySQL Conflict
Certain Linux installation types include MySQL as a bundled package. No MySQL instance other than the one bundled with the Controller should run
on the Controller host. Verify that no such MySQL processes are running.
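One way to verify is to list running mysqld processes (a sketch; the exact process name can vary by MySQL packaging):

```shell
# List any mysqld processes running on this host. Before installing the
# Controller this should return no rows; afterwards, only the Controller's
# bundled instance should appear. The [m] bracket excludes grep itself.
ps -ef | grep '[m]ysqld' || echo "no mysqld processes found"
```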
Verify the size of virtual memory on your system and modify it if it is less than 10 GB. Refer to the documentation for your operating system for instructions
on modifying the swap space or Pagefile size.
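On Linux, a quick sketch of how to check the configured swap size:

```shell
# Print total swap space in GB; increase it if the value is below 10.
free -g | awk '/^Swap:/ {print $2 " GB swap configured"}'
```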
Disk Space
In addition to the minimum disk space required to install the Controller for your profile size, the Enterprise Console writes temporary files to the system
temporary directory, typically /tmp on Linux or c:\tmp on Windows. The Enterprise Console requires 1024 MB of free temp space on the controller host.
On Windows, in case of an error due to not meeting the above requirement, you can set the temporary directory environment variable to a directory with
sufficient space for the duration of the installation process. You can restore the setting to the original temp directory when the installation is complete.
Network Ports
Review the ports that the Controller uses to communicate with agents and the rest of the AppDynamics platform. For more information, see Port Settings.
Note that on Linux systems, port numbers below 1024 may be considered privileged ports that require root access to open. The default Controller listen
ports are not configured for numbers under 1024, but if you intend to set them to a number below 1024 (such as 80 for the primary HTTP port), you need
to run the Enterprise Console as the root user.
This page describes the configuration requirements and considerations for using a Linux system as a Controller host machine.
read, write and execute permissions on the directory where you install the Controller
write permission on the /etc/.java/.systemprefs directory
If you are installing other AppDynamics Platform server components, such as the EUM Server or Application Analytics Processor, on the same machine, it
is recommended that you perform the installation as the same user or a user with the same permissions on the target machine.
Virus Scanners
Configure virus scanners on the target machine to ignore the AppDynamics Enterprise Console directory and database directory (or simply the entire
Controller directory). Code is never executed from the data directory, so it is generally safe to exclude this directory from virus scanning. The default
location of the data directory is <controller_home>/db/data.
Also configure virus scanners to trust the Controller launcher, database executable, reporting service launcher, and events service (analytics processor)
launchers. The launcher names are:
Anti-virus Exclusions
If you are running an antivirus program on your Linux system, it must meet one of the following conditions:
The anti-virus program is read-only; it only detects and reports issues but never modifies files
The anti-virus program excludes the MySQL data directory (datadir), which is often set to the path db/data.
If the program does not meet either of those conditions, it can randomly corrupt the MySQL database and, as a result, the Controller.
For example, you can install the package that includes netstat with the following command on CentOS:
sudo yum install net-tools
libaio Requirement
The Controller requires the libaio library to be on the system. This library facilitates asynchronous I/O operations. Note that if you have a
NUMA-based architecture, you must also install the numactl package.
Install libaio on the host machine if it does not already have it installed. The following table provides instructions on how to install libaio for some
common flavors of Linux operating system.
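As a sketch, the package is commonly available under the following names; verify the exact package name against your distribution's repositories:

```shell
# RHEL / CentOS / Fedora / Amazon Linux
sudo yum install -y libaio
# Ubuntu / Debian
sudo apt-get install -y libaio1
# NUMA-based hosts additionally require numactl
sudo yum install -y numactl
```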
For RHEL8, CentOS8, and Amazon2, you install the ncurses-libs-5.x library using an rpm file downloaded from a trusted source:
Note: The ncurses-libs depends on the ncurses-base so you should install the ncurses-base first.
http://mirror.centos.org/centos/7/os/x86_64/Packages/ncurses-base-5.9-14.20130511.el7_4.noarch.rpm
http://mirror.centos.org/centos/7/os/x86_64/Packages/ncurses-libs-5.9-14.20130511.el7_4.x86_64.rpm
You must either create symlinks for ncurses-libs-5 which point to ncurses-libs-6, or install the ncurses-compat-libs package, to provide ABI version 5 compatibility.
RHEL8 symlink:
sudo ln /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
sudo ln /usr/lib64/libncurses.so.6.1 /usr/lib64/libncurses.so.5
CentOS8 symlink:
sudo ln /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
sudo ln /usr/lib64/libncurses.so.6.1 /usr/lib64/libncurses.so.5
Amazon2 symlink:
sudo ln -s /usr/lib64/libncurses.so.6.0 /usr/lib64/libncurses.so.5
sudo ln -s /usr/lib64/libtinfo.so.6.0 /usr/lib64/libtinfo.so.5
RHEL8 compat-libs:
sudo yum install -y ncurses-compat-libs
CentOS8 compat-libs:
sudo yum install -y ncurses-compat-libs
Amazon2 compat-libs:
sudo yum install -y ncurses-compat-libs
Note: For libncurses6 you need to create symlink for libncurses5 pointing to libncurses6.
Debian: Use a package manager such as APT to install the library (as described in the Ubuntu instructions above).
tzdata Requirement
Ubuntu version 16 and higher requires the tzdata package in order to install the Enterprise Console and Controller.
Warning in database log: "Could not increase number of max_open_files to more than xxxx".
Warning in server log: "Cannot allocate more connections".
To check your existing settings, as the root user, enter the following commands:
The output indicates the soft limits for the open file descriptor and soft limits for processes, respectively. If the values are lower than recommended, you
need to modify them.
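As a sketch, the soft limits can be read with the shell's ulimit builtin:

```shell
# Soft limit on open file descriptors for the current user
ulimit -S -n
# Soft limit on the number of user processes (may print "unlimited")
ulimit -S -u
```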
Where you configure the settings depends upon your Linux distribution:
If your system has a /etc/security/limits.d directory, add the settings as the content of a new, appropriately named file under the
directory.
If it does not have a /etc/security/limits.d directory, add the settings to /etc/security/limits.conf.
If your system does not have a /etc/security/limits.conf file, it is possible to put the ulimit command in /etc/profile. However,
check the documentation for your Linux distribution for the recommendations specific for your system.
1. Determine whether you have a /etc/security/limits.d directory on your system, and take one of the following steps depending on the
result:
If you do not have a /etc/security/limits.d directory:
a. As the root user, open the limits.conf file for editing:
/etc/security/limits.conf
b. Set the open file descriptor limit by adding the following lines, replacing <login_user> with the operating system username
under which the Controller runs:
If you do have a /etc/security/limits.d directory:
a. As the root user, create a new, appropriately named file under that directory, for example:
/etc/security/limits.d/appdynamics.conf
b. In the file, add the configuration setting for the limits, replacing <login_user> with the operating system username under
which the Controller runs:
This step is not required for RHEL/CentOS version 5 and later, where the file below has been combined into /etc/pam.d/system-auth,
which already contains the required line.
/etc/pam.d/common-session
When you log in again as the user identified by login_user, the limits will take effect.
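The limit values themselves are elided above. As a sketch, such a file typically pairs soft and hard limits for open files and processes; the numbers below are illustrative assumptions, not AppDynamics' stated recommendations, so use the limits documented for your Controller profile:

```
<login_user> soft nofile 65535
<login_user> hard nofile 65535
<login_user> soft nproc 8192
<login_user> hard nproc 8192
```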
The following table lists operating systems and the commands to install the required libraries:
CentOS 6.1, 6.2; CentOS 6.3, 6.4, and 6.5; Fedora 14:
$ yum install fontconfig freetype urw-base35-fonts
$ yum groupinstall hebrew-support
See Administer the Reporting Service for information on configuring the service.
GNU C Libraries
The Reporting Service requires GLIBCXX_3.4.9 or later and GLIBC_2.7 or later to run.
This page provides operational and setup guidelines for running the Controller on Windows.
If the host Windows machine is the Enterprise Console host, the Windows user account is configured to run with administrative privileges by
default.
Virus Scanners
Configure virus scanners on the target machine to ignore the AppDynamics Enterprise Console directory and database directory (or simply the entire
Controller directory). Code is never executed from the data directory, so it is generally safe to exclude this directory from virus scanning. The default
location of the data directory is <controller_home>\db\data.
Also configure virus scanners to trust the Controller launcher, database executable, reporting service launcher, and events service (analytics processor)
launchers. The launcher names are:
For details on how to view services and exclude directories in Windows Defender, refer to the documentation for your version of Windows.
The data directory does not contain any files that are meaningful to the indexer, so it can be excluded from indexing. To exclude the directory from
indexing, you can add the directory to the excluded directories list in the Indexer Control Panel, disable indexing in directory preferences, or stop the
indexing service entirely.
To add the directory to the excluded directory list, follow these steps:
1. From the Control Panel Indexing Options dialog, click the Modify button.
2. In the Indexed Locations dialog, navigate to and select the Controller data directory.
3. Clear the checkbox for the data directory and click OK.
Windows Update
Configure the Windows Update preferences so that the server is not automatically restarted after an update. To configure the restart policy:
1. Open the Local Group Policy Editor dialog (search for and run the gpedit executable).
2. Navigate to the Windows Update component. In the tree, you can find it under Local Computer Policy > Computer Configuration >
Administrative Templates > Windows Components.
3.
.NET Framework
Components of the .NET Framework 3.5 are required to allow the Controller to be installed as a Windows service on the target machine. The installer
checks your system and indicates if .NET 3.5 is not found. Follow the instructions on the Enterprise Console to get the required components.
Even if you have the latest version of .NET installed, you still have to install .NET 3.5. This is due to a Glassfish requirement where the
Glassfish launcher explicitly requires .NET 3.5.
Windows 7
The Controller is automatically installed as a Windows service. The Windows 7 operating system must have the hotfix described in
http://support.microsoft.com/kb/2549760. This hotfix ensures that the Windows registry modifications made by the installer to extend the default
service timeouts work as expected. The installer checks for the presence of the hotfix and warns you if it is not found.
This page provides an overview of how to configure and administer Controller data storage.
Design of your applications (all metadata about business transactions, tiers, policies, and so on)
History of the performance of your applications (metric data)
Transaction snapshot data and events
History of incidents that occurred (both resolved and unresolved incidents)
By default, the database files and data are stored in: <controller_home>/db.
When attempting to access the data, the Controller reads the database user password from these sources and in the priority shown:
If you do not keep the password in the environment variable, you will need to supply it in response to a command-line prompt whenever performing an
operation that involves accessing the database, starting the database, stopping the database, or logging into the database.
If you are using symlinks, you must create the symlink outside of the root Controller install directory and move the data directory to the new volume after
you install the Controller.
Warning: Do not mount a file system on <controller_home>/db/data. During Controller upgrade, the Enterprise Console moves the data directory to
data_orig. Upgrade will fail if the Enterprise Console cannot complete this move.
You can also update the datadir path on the Controller Database Configurations page of the Enterprise Console GUI.
datadir
tmpdir
log
slow_query_log_file
3. Copy (or move) the existing data directory <controller_home>/db/data to the new location.
For example, to copy the data on Linux:
cd <controller_home>/db/
cp -r data <new-location>
Related pages:
AppDynamics strongly recommends that you perform routine data backups of the Controller.
One method of maintaining backups of the Controller is to implement high availability. With high availability, the database on the secondary Controller
keeps a replicated copy of the data on the primary Controller. A secondary Controller also makes it practical to take cold copies of the Controller data,
since you can shut down the secondary to copy its data without affecting Controller availability. For information on HA, see Controller High Availability (HA).
Other approaches include using a disk snapshot mechanism or using database backup tools. The Backup Tools section describes tools that support each
approach. In addition to regular backups, back up the Controller and Enterprise Console before upgrading or migrating them from one server to another.
This page provides an overview of the tasks and considerations related to backing up the Controller. Note that your Controller should be shut down before
performing any import functions.
Note that Controller versions 4.3 and later work only if you back up and restore the <Controller Home>/.appd.scskeystore file.
Backing up the entire system each night may not be feasible when dealing with the large amount of data typically generated by a Controller deployment.
To balance the risk of data loss against the costs of performing backups, a typical backup strategy calls for backing up the system at different scopes at
different times. That is, you may choose to perform partial backups more frequently and full backups less frequently.
A possible backup strategy may be to perform a level 1 and level 2 backup very frequently, say nightly, and a level 1 and level 3 backup about once a
week. In addition to performing a level 1 or 2 backup, you should also back up the data for the Enterprise Console with mysqldump on a regular basis. A
level 3 backup also backs up the Enterprise Console data. See Enterprise Console Back Up and Restore for more information.
To perform this type of backup, simply copy everything in the Controller installation directory EXCEPT the data directory.
While it is recommended that you copy the entire Controller home except for the data directory when performing a light backup, particularly before
performing a Controller upgrade, there are scenarios in which you may wish to copy only site-specific configuration files. This may be the case if you are
migrating an existing Controller configuration to a new Controller installation, for example. For a list of those files, see Migrate the Controller.
Some third-party backup tools, such as Percona XtraBackup, do not rely on transactions so you can perform a hot backup of your system (that is, back up
the Controller database while it is running).
1. A cold backup of all three directories (Controller install directory, MySQL datadir, and JRE directory). To perform a cold backup, shut down the
Controller app server and database. Then, create an extra copy of the three directories using the cp -r command, the tar utility, rsync, or
others.
2. A hot backup, which means the Controller is running.
a. If you have a high availability setup for the Controller, you can shut down the database on the secondary Controller. Then, you can
perform a cold backup on the secondary Controller and restart the database.
b. If you do not have a high availability setup for the Controller, use a third-party tool such as Percona XtraBackup to back up the MySQL
datadir. Then, use the cp -r command, the tar utility, rsync, or others to back up the Controller install directory and the JRE
directory.
Percona XtraBackup can fail to hot backup Controllers that are too busy. To avoid this error,
Backup Tools
This section lists a few third-party tools that you can use to back up Controller data. The list is not exhaustive; you can use any tool capable of backing up
MySQL data with the Controller. It is up to you to test your backup and restore process. However, the tool you decide on should back up the data as binary
data.
Percona XtraBackup
An alternative to using a database backup tool is to use a disk snapshot tool to replicate the disk or partition on which the Controller data resides. Options
include:
ZFS volume manager. For more information, see Using ZFS methods for data backup.
Details for performing this type of backup are beyond the scope of this documentation. For more information, refer to administration documentation
applicable to your specific operating system.
While mysqldump is not recommended for use on large data tables, such as the Controller metric data tables, it is useful for backing up Controller
metadata. Metadata defines the monitored domain for the Controller, including applications, business transactions, alert configurations, and so on.
The following instructions assume that the binary path for the Controller's MySQL instance is in the PATH variable. The path to the Controller's instance of
MySQL must precede any other MySQL path on your system. This prevents conflicts with other database management systems on your machine, such as
a MySQL instance included by default with Linux.
The database binary files for the Controller database are in <controller_home>/db/bin.
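A sketch of that PATH setup, assuming a hypothetical install path of /opt/appdynamics/controller:

```shell
# Put the Controller's bundled MySQL binaries first on the PATH so that
# mysqldump/mysql resolve to the Controller's instance rather than any
# system-wide MySQL installation. The install path is an assumption.
CONTROLLER_HOME=/opt/appdynamics/controller
export PATH="$CONTROLLER_HOME/db/bin:$PATH"
# Show which mysqldump would now be used, if one is installed:
command -v mysqldump || echo "mysqldump not found on PATH"
```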
Before using mysqldump, first ensure that the Controller app server is stopped. If you attempt to run mysqldump while the app server is
running, it will severely degrade the performance and stability of the Controller.
To use mysqldump, run the mysqldump executable, passing the root username, password, and output file. The executable is located in the following
directory:
For a full example that shows which tables to exclude for a metadata backup, see the contents of the metadata backup script described in the next section.
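A minimal sketch of such a mysqldump invocation; the database name controller is the Controller's default schema, and the output path is an assumption:

```shell
# Dump the Controller database to a file, prompting for the root database
# user's password. For a metadata-only backup, add --ignore-table options
# for the large metric tables, as the referenced backup script does.
mysqldump --user=root --password controller > /tmp/controller_metadata_dump.sql
```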
Linux: ControllerMetadataBackup.sh.txt
Backing up the Controller with a custom MySQL data directory location using mysqldump will result in an incomplete and unusable metadata
dump. The default location for the Controller's MySQL data directory is appd_install_dir/db/data. If you do not see the data directory
here then that means an alternate directory or mount point was selected and configured for your MySQL data directory.
Shut down your Controller before using the following command to import the data into a database:
The import overwrites the existing tables. You can clone your installation to another host and test your restoration there.
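A minimal sketch of such an import; the dump file path is an assumption:

```shell
# Restore a previous dump into the Controller database, prompting for the
# root database user's password. This overwrites the existing tables, so
# shut down the Controller first.
mysql --user=root --password controller < /tmp/controller_metadata_dump.sql
```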
A Controller that is installed with a custom MySQL data directory location requires additional flags.
appdynamics-backup.sh.txt
Rename the script (by removing the .txt extension). In the script:
Verify or edit the values of the CONTROLLER_HOME and DESTINATION variables at the beginning of the script for your environment.
Edit the if/then/else clause at the end of the script if you want to implement backup file rotation, call your enterprise backup system to pick up the
compressed Controller database image, or send an alert if the backup fails for any reason.
# decompress the backup image and apply the log taken during backup
CONTROLLER_HOME=/path/to/AppDynamics/Controller && cd /path/to/big/staging/folder \
&& innobackupex --decompress --parallel=16 . && innobackupex \
--defaults-file=$CONTROLLER_HOME/db/db.cnf --use-memory=1GB --apply-log --parallel=16 .
For more information on these options, see the Percona innobackupex option reference.
To perform a cold backup, simply shut down the Controller and back up the data directory located in <controller_home>/db.
This page discusses best practices for managing disk space for the MySQL database used by the Controller.
Before it reaches that point, the Controller displays a low disk space alert in the UI and writes an error level event to server.log. The point at which the
Controller generates the alert depends on its profile, as follows:
The Controller shuts itself down when there is less than 1 GB on the disk regardless of the Controller profile type.
It's important to note that the Controller monitors the disk or partition that it is installed on. If the Controller data resides on a different disk or
partition from the Controller home directory, you will need to monitor available space on that disk or partition separately.
To manage how much disk space the Controller database uses, you can change the amount of data retained in the Controller database. See Database
Size and Data Retention.
This page provides both the on-premises and SaaS default data retention periods for data stored by the Controller and instructions for modifying on-
premises values. AppDynamics manages SaaS Controller deployments, which eliminates the need for manual modifications.
Related pages:
This page describes how to migrate a Controller from a physical or virtual machine (VM) to a new physical machine.
Before Starting
Migrating the Controller often results from the need to move the Controller to new hardware due to increased load. Before starting, make sure that the new
hardware meets the AppDynamics requirements as described in Controller System Requirements. Specifically, you should review the Controller hardware
performance profiles and the hardware requirements per profile information to verify that the target Controller hardware meets the RAM size and Disk I/O
requirements.
You will need to update the MAC address associated with your license since licenses are tied to the machine MAC addresses. You can also
acquire new license files for the new Controller hardware. Send the MAC addresses to salesops@appdynamics.com and request a new license
file or two new licenses, if upgrading to an HA pair. See Apply or Update a License File for more information.
If you are performing a migration and upgrade for a 4.3 version Controller, you should first migrate the Controller. Then, you can upgrade the Controller to
4.5 or higher by installing the Enterprise Console and using the Discover & Upgrade feature. This also applies to migrations involving different OS
environments.
Using vMotion to migrate a VMware guest with a running Controller inside it from one host to another is not supported. Doing so will lead to
dropped metrics and UI performance problems.
You add the new host as an HA pair to the old host, set the new host as active, and then remove (decommission) the old host. When finished, the
Controller will run on the new host.
Before starting, you should review the requirements and concepts related to Controller High Availability.
b. Select the remove binaries option. (Do not select Remove entire cluster.)
The Controller is now running on the newly provisioned host.
You can keep the same access key from the old Controller. To migrate or update your access key, see Controller Secure Credential Store.
Before starting, you should review the requirements and concepts related to Controller High Availability.
1. Install the Enterprise Console on the new Controller host, using the same version as the one managing the existing Controller host.
2. Use the Enterprise Console to install the same version of the Controller on your new Controller host using the same passwords as on the existing
Controller host.
3. Shut down the Controller Appserver and database on both Controller hosts.
4. Copy <controller_home>/.appd.scskeystore and the Controller's MySQL datadir from the old host to the correct locations on the new
host.
When migrating the data, ensure that the destination MySQL version is the same as the source version.
You can keep the same access key from the old Controller. To migrate or update your access key, see Controller Secure Credential Store.
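Step 4 above can be sketched as follows, using temporary stand-in directories; the paths and copy mechanism are illustrative only, not the exact layout of your installation:

```shell
# Illustrative sketch only: OLD and NEW stand in for <controller_home> on the
# old and new hosts; a real migration copies between machines over SSH
# (for example with scp or rsync -e ssh), not locally.
OLD=$(mktemp -d)
NEW=$(mktemp -d)
mkdir -p "$OLD/db/data" "$NEW/db"
touch "$OLD/.appd.scskeystore"            # the secure credential keystore
echo "ibdata" > "$OLD/db/data/ibdata1"    # stand-in for the MySQL datadir contents

# Step 4: copy the keystore and the MySQL datadir, preserving attributes
cp -p "$OLD/.appd.scskeystore" "$NEW/"
cp -a "$OLD/db/data" "$NEW/db/"
```

On the real hosts, both copies must land in the corresponding locations under the new Controller home, and the destination MySQL version must match the source version as noted above.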
The following are considerations and requirements for the machine hosting the Controller.
The Controller is not supported on virtual machines with oversubscribed physical CPUs, such as AWS T2 instances. These Burstable Performance
Instances are CPU-throttled and do not have dedicated storage bandwidth. We recommend you use a Fixed Performance Instance type instead.
On VMware VMs
On Hyper-V VMs
On Microsoft Hyper-V, "Dynamic Memory" needs to be disabled and "Static Memory" needs to be enabled.
You cannot enable or disable Dynamic Memory if the VM is in either the Running or Saved state.
Licenses on VMs
The Controller license is bound to the MAC address of the host machine. To run the Controller on a virtual machine, you must ensure that the host virtual
machine uses a fixed MAC address.
Virtual machines are less likely than similarly specified physical machines to provide sufficient I/O performance for the Controller. In any case, whether you are deploying the Controller to a physical or virtual machine, you need to ensure that the machine meets the I/O
performance requirements set forth in Controller System Requirements.
The following example shows an entry in /etc/hosts with the IP 21.43.65.987, the fully qualified hostname application1.mycompany.com and the
alias app1:
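Assuming the values stated above, the entry would look like this:

```
21.43.65.987 application1.mycompany.com app1
```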
For more information about elastic network interfaces, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html.
The AppDynamics Controller is certified to run on an AWS environment with the Aurora Database. This page provides information on installing,
configuring, and administering a Controller deployment on AWS with Aurora Database.
Installation Overview
You can deploy a medium- or large-scale Controller in AWS using Aurora. Aurora provides higher performance than MySQL, which allows you to scale
your Controller to handle more metrics.
Before you install the Controller, review the requirements for the components you plan to install and prepare the host machines. The requirements vary
based on the components you deploy and the size of your deployment.
You can manually deploy your Controller on AWS. See Deploy the Controller on AWS
This deployment requires more time and attention because you set up all of the configuration yourself. However, it gives you the freedom to customize your
deployment.
You use the Enterprise Console to deploy the Controller by specifying Aurora as the database type. The Enterprise Console is the installer for the
Controller, and you can use it to manage the entire lifecycle of new or existing AppDynamics Platforms and components.
You can get the software for installing the platform components from the AppDynamics download site. See Download AppDynamics Software
Configure an Application Load Balancer in front of the Controller and ensure SSL terminates at the Elastic Load Balancer.
With Amazon Relational Database Service (RDS), you no longer need to worry about database backups, as it takes care of this for you. Also, you no
longer need to implement high availability (HA) on your own, since you can instead leverage the Standby Replica that Aurora/RDS offers and the Aurora
database is horizontally scalable. With the multi-AZ deployment option, Aurora offers 99.95% availability.
Related pages:
This page provides considerations and requirements for AWS instances that host the Controller.
Instance Sizing
For AWS instance sizing by metric ingestion rate, see the Controller Sizing table.
The actual metrics generated can vary greatly depending on the nature of the application and the AppDynamics configuration. Be sure to validate your
sizing against the metric ingestion rate before deploying to production.
For more information about elastic network interfaces, see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html.
This page describes the procedure used to deploy the AppDynamics Controller to an AWS environment.
You can set custom configurations when you manually deploy the Controller to AWS. In these steps, you manually set up security groups, database
parameter groups, Amazon Relational Database Service (RDS), an Aurora DB instance, an EC2 instance, an Elastic Network Interface (ENI) for the Controller, DNS
CNAMEs, and listeners for your load balancer.
Afterward, you install the Enterprise Console, and then use the Enterprise Console to install the Controller and configure it for the AWS environment.
Based on the Amazon Machine Image (AMI), you can provision a replacement EC2 instance for the Controller as needed.
For help with your deployment, contact your AppDynamics account, sales, or professional services representative.
If the Controller EC2 instance stops almost immediately after you install the Controller, there may be an issue with your EBS devices. AWS may report that
it is not able to boot from the volumes. If the EC2 machine stops, check and update your EC2 volumes so that they are mounted correctly.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. For each security group, you add rules that control the
inbound traffic to instances, and a separate set of rules that control the outbound traffic.
At a minimum, we recommend creating the following security groups when deploying AppDynamics in AWS using Aurora DB.
You can create additional security groups to align with your organization's standards.
Inbound rule: Allow all inbound TCP traffic on ports 22 and 9191
Outbound rules:
Inbound rules:
Inbound rule: Allow inbound traffic on port 3388 from appd-appserver-security-group and appd-ec-security-group
You must modify some of the parameters for your database instance.
innodb_file_format Barracuda
innodb_lock_wait_timeout 180
innodb_max_dirty_pages_pct 20
lock_wait_timeout 180
log_bin_trust_function_creators 1
max_allowed_packet 104857600
max_heap_table_size 1610612736
query_cache_type 0
sql_mode 0
tmp_table_size 67108864
wait_timeout 31536000
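As an alternative to the console steps that follow, the same parameter group can be created and populated with the AWS CLI. This is a sketch: the group name is a placeholder, and the parameter-group family must match your Aurora engine version:

```
aws rds create-db-parameter-group \
    --db-parameter-group-name <parameter-group-name> \
    --db-parameter-group-family aurora5.6 \
    --description "AppDynamics Controller DB parameters"

aws rds modify-db-parameter-group \
    --db-parameter-group-name <parameter-group-name> \
    --parameters "ParameterName=lock_wait_timeout,ParameterValue=180,ApplyMethod=immediate" \
                 "ParameterName=innodb_lock_wait_timeout,ParameterValue=180,ApplyMethod=immediate"
```

The remaining parameters from the list above can be set the same way.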
1. Navigate to the Parameter groups page on the Amazon RDS in the AWS console.
2. Click Create parameter group on the top right of the page.
character_set_client utf8
character_set_connection utf8
character_set_filesystem binary
character_set_results utf8
character_set_server utf8
collation_connection utf8_general_ci
collation_server utf8_unicode_ci
innodb_default_row_format DYNAMIC
innodb_file_per_table 1
lower_case_table_names 1
You launch an Amazon RDS Aurora DB Instance from the AWS console.
2. Select the desired DB Instance class. See Prepare the AWS Machine for the Controller for instance sizing information
3. Select Create Replica in Different Zone to have Amazon RDS maintain a synchronous standby replica in a different Availability Zone than the
DB instance. If a planned or unplanned outage of the primary occurs, Amazon RDS will automatically fail over to the standby replica.
5. Select the Default VPC and subnet group. The DB should not be accessible publicly, so select No for this option.
6. For the security group, select the appd-db-security-group that you created previously.
7. For the database options, you do not need to specify the DB cluster identifier, nor enter a database name because the installer creates the
necessary databases for you.
8. Use the default database port of 3388.
9. Specify the custom parameter groups that you created previously.
During installation, AppDynamics must create additional databases and users in the Aurora database for the AppDynamics Controller application to
interact with the Aurora database server.
Resulting output:

+----------------------------------------------------------------------------------------------------------+
| Grants for admin@%                                                                                        |
+----------------------------------------------------------------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER, SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, LOAD FROM S3, SELECT INTO S3, INVOKE LAMBDA ON *.* TO 'admin'@'%' WITH GRANT OPTION |
+----------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
5. Apply the grants (listed in the output) to the new root user that you created in Step 1. The root user will then have the same grants as the admin user.
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, REFERENCES, INDEX, ALTER,
SHOW DATABASES, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT,
CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, LOAD FROM S3, SELECT
INTO S3, INVOKE LAMBDA ON *.* TO 'root'@'%' WITH GRANT OPTION
Resulting output:
6. Once the root user has the same privileges as the primary username admin, verify that you can log in to the database as root, and then
continue with the installation.
If you do not have the users "root@x.x.x.x" and "root@ip-x-x-x-x.ec2.internal", ignore them and continue to work with root@'%'.
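If the root user referenced in Step 1 does not yet exist, it can be created with a standard MySQL statement; the password here is a placeholder:

```
mysql> CREATE USER 'root'@'%' IDENTIFIED BY '<your_password>';
```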
After installation, you can revoke the primary-level privileges from the Aurora root user without interfering with the Controller. However, primary-level
privileges for the Aurora root user are required prior to upgrading the Controller.
This example uses an Amazon Linux AMI provided by AWS: Amazon Linux AMI 2017.09.1 (HVM), SSD Volume Type - ami-f63b1193. The Amazon Linux AMI is an EBS-backed, AWS-supported image.
The default image includes AWS command-line tools: Python, Ruby, Perl, and Java.
The repositories include: Docker, PHP, MySQL, PostgreSQL, and other packages.
5. To launch the instance, you must specify a key pair. If you are using an existing key pair, ensure that you have access to the private key file;
otherwise, generate a new key pair and download it from the AWS console. The private key file is required to connect to the instance using SSH.
The new instance should be available after a few minutes. Once the instance is available, you can verify the status using the AWS console.
6. Connect to the instance through SSH, substituting the appropriate path and filename for your private key file.
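The SSH command takes the shape below; the key path and DNS name are placeholders, and ec2-user is the default user for Amazon Linux AMIs:

```
ssh -i </path/to/private-key.pem> ec2-user@<instance-public-dns>
```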
You need to introduce an ENI, which acts as a secondary network interface that you can attach to and detach from the Controller EC2 instance.
You then associate the AppDynamics license file with the MAC address of this network interface instead of the MAC address of the underlying EC2
instance.
AppDynamics recommends that you create DNS aliases to connect the Aurora database instance and the Controller EC2 instance.
You should use these aliases when you install the Controller through the Enterprise Console. Using aliases prevents you from tightly coupling the
Enterprise Console with the specific Aurora DB instance or EC2 instance hosting the Controller.
For example, if the database were to fail completely, and you needed to restore the database from a snapshot, then you would have a new DNS name for
the Aurora DB instance. Pointing the Enterprise Console at an alias, instead of the DNS name for the Aurora DB instance itself, allows you to only update
the DNS alias, and leave the Enterprise Console configuration unchanged.
Similarly, if you need to move the Controller to a different EC2 instance that resides in another Availability Zone (AZ), you can just update the DNS alias
to point to the ENI in the new AZ.
For purposes of testing an AWS configuration, it may not be possible to create DNS aliases. In that case, you can simply add entries to the /etc/hosts file
on both the Enterprise Console and Controller EC2 instances. For example:
172.31.17.84 appdcontroller
172.31.25.80 appd-database
This page describes how to install the Enterprise Console in an AWS environment using an EC2 instance.
You then use the Enterprise Console to install the Controller on a separate EC2 instance (using Aurora DB as
the backend).
The default image includes AWS command-line tools: Python, Ruby, Perl, and Java.
The repositories include: Docker, PHP, MySQL, PostgreSQL, and other packages.
1. Select the Instance type. The Enterprise Console has only modest requirements, so you can select the t2.medium instance type.
The new instance should be available after a few minutes. Once the instance is available, you can verify the status using the AWS console. Connect to it
through SSH, enter:
Then, substitute the appropriate path and filename for your private key file.
Then, SSH to your EC2 instance and run the installer to install Enterprise Console:
cd /data
chmod 700 platform_setup.sh
./platform_setup.sh -c
While installing the Enterprise Console, you are prompted to either select a database port, or accept the default port of 3377.
Do not use port 3388 because it conflicts with the Controller database port which is used later in the installation process.
You must have write access to the Enterprise Console installation directory you select.
When installing one Controller in the AWS environment, it is easier to install both the Controller and Enterprise Console on the same host.
However if you plan to install multiple Controllers and want to manage them through a single Enterprise Console instance, then you should install the
Enterprise Console and the Controller on separate hosts.
Complete the installation of Enterprise Console, and make a note of any passwords you specify during the installation process.
This page describes how to install the Controller in an AWS environment using an EC2 instance for the Controller Appserver, and an AWS RDS Aurora
instance for the database. This page uses the Enterprise Console that you previously installed.
Before you install the Controller using Aurora as the database, you must adjust the time zone of the Aurora database to match the time zone of
the Controller server. By default, AWS sets the time zone equal to the UTC time zone. See updating the Aurora RDS time zone
To install the Controller in an AWS environment using an EC2 instance for the Controller Appserver, and an AWS RDS Aurora instance for the database:
cd ./appdynamics/platform/platform-admin/bin
./platform-admin.sh create-platform --name <platform_name> --installation-dir /data/appdynamics/platform/product
2. Add a new host to the platform and install the Controller on the same host as the Enterprise Console:
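As a sketch, the host can be added with the platform-admin CLI; here localhost is assumed because the Controller shares the Enterprise Console host:

```
./platform-admin.sh add-hosts --hosts localhost
```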
3. Install the Controller using Aurora as the database. Substitute the appropriate values for the admin user name, passwords, and Controller
host and port. Ensure that databaseType is set to Aurora, and use the private DNS name of the network interface attached to the
Controller EC2 instance instead of the DNS name for the EC2 instance itself.
The installer connects to the Aurora DB instance, and creates the necessary databases, tables, and other objects. After a few minutes, the
Controller should be installed and ready to use.
Glassfish Configuration
You can configure domain protocols, network listeners, transports, and thread pools from the Enterprise Console UI. You can edit them from the
AppServer Configurations page by selecting the platform, and navigating to Configurations > Controller Settings > Appserver Configurations.
The Enterprise Console restarts the Controller after you submit your configurations.
You can configure a load balancer after you have installed the Controller, and it is running in EC2. The load balancer distributes traffic across multiple ports
on the Controller.
If using HTTPS, an SSL certificate should be available. For testing, you can generate a self-signed certificate using Open SSL (described here). You can
then import that certificate using AWS Certificate Manager (ACM).
You can create a load balancer to align with your organization's standards.
1. Navigate to the Load Balancers page on the EC2 Dashboard in the AWS console.
2. Select Create Load Balancer at the top left of the page.
3. Select Application Load Balancer as the load balancer type to terminate SSL at the ELB.
4. Enter a name for the new load balancer, and select HTTPS (Secure HTTP) as the load balancer protocol to accept HTTPS traffic only.
5. Select the availability zones to enable for the new load balancer. It should include the availability zone in which the Controller Appserver EC2
instance resides.
8. For the initial configuration, set the load balancer to route all traffic to port 8090 using HTTP, and define the standard health check for the
Controller.
10. Launch the new load balancer. It may take a few minutes before the load balancer becomes available.
11. Verify that you can access the Controller UI through the load balancer.
If your SSL cert is self-signed, a browser warning displays. You can ignore this warning if you are testing the UI. However for Agent
traffic, you need a valid certificate that is trusted by the Agent.
http://<hostname>:<port>
Depending on the type of request, you can define additional rules to route traffic to various ports on the Controller. This table defines rules that users
typically include for their load balancer:
Thread Pool | Path | Port | Description
http-thread-pool | Default / User Traffic | 8090 | Default thread pool for all other traffic
restui-default-thread-pool | /controller/restui/* | 8095 | Default thread pool for all restui traffic
For example, you create a target group for the metrics-thread-pool with these settings:
You can use the same health check path for all of the target groups; however, you may want to decrease the frequency because performing the same
check on all ports every 30 seconds is not required.
Register Targets
2. After the target groups have been defined, you can add new listener rules to map the traffic to the appropriate target group (based on the path
requested):
The order of the rules is important because some paths may match multiple rules.
You can create an Amazon Machine Image (AMI) to recover from an unexpected event or create a new Controller from the same image.
An AMI should be created for the Controller, including required optimizations, and an auto-scaling group should be created based on the AMI. You
can configure the Elastic Load Balancer (ELB) to send traffic to instances in the auto-scaling group, which will consist of one EC2 instance at a time.
Creating an AMI and auto scaling group involves the following steps:
bin/platform-admin.sh stop-platform-admin
This ensures that all jobs and services are stopped in order to create a clean image.
A new AMI should be created whenever the Controller version is upgraded, or configuration changes are made. This ensures that the changes
are propagated to the new EC2 instance, in the event that you need to move to a different availability zone, for example. At the same time,
previous AMI instances should be retained as well, in case a rollback is required.
1. Stop the Controller Appserver once optimizations have been applied to it.
2. Click Create Image to create an AMI for the Controller.
1. Select the appropriate instance type for the app server of your AMI.
4. Associate it with the existing security group for AppDynamics application servers.
1. Specify a name for the auto scaling group, enable traffic from the load balancer, and map the target groups created earlier.
Start with zero instances, as you will need to add your existing instance to the group later.
Keep the group at its initial size of zero, as you do not want to add more than one Controller at any point in time.
You have the option to migrate an existing 4.4.3 or later on-premises Controller database to Aurora DB. The Controller might already reside in AWS, or in
your data center.
Although the Controller already contains a MySQL database, we recommend migrating to Aurora because it offers
replication, high availability, and elasticity. The Amazon Relational Database Service (RDS) handles provisioning, patching, backup, recovery, failure
detection, and repair of the database. Aurora DB also offers encryption at rest, with encryption of all automated backups, snapshots, and replicas in the
same cluster.
If your Controller is not already in AWS, then follow Migrate the Controller to migrate it. Once this is complete, you should have a Controller running on one
or two EC2 instances in AWS, depending on whether or not your existing Controller deployment is high availability (HA), with the MySQL database hosted
on those instances.
Commands to start and stop the database do not work with Aurora DB.
Since Controller upgrades to 4.4.3 from 4.3.x or earlier would use MySQL 5.5, it is important that you know what your Controller MySQL version
is. Please refer to Bundled MySQL Database Version to learn how to check your MySQL version and upgrade it if necessary.
Note that running a Controller on AWS requires that some of the cluster parameter group and db parameter group settings be adjusted. See Deploy the
Controller on AWS for more information.
If you attempt to upgrade or move a Controller migrated without its liquibase-stored procedures, the upgrade will fail. You must recreate
these stored procedures manually in AWS.
Before using mysqldump, first, ensure that the Controller app server is stopped. If you attempt to run mysqldump while the app server is
running, it will severely degrade the performance and stability of the Controller.
To use mysqldump, run the mysqldump executable, passing the root username, password, and output file:
cd <controller_home>/db/bin
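The export command takes roughly the following shape; the Controller database port and staging path are placeholders, and your bundled MySQL may accept slightly different options:

```
./mysqldump -u root -p --protocol=TCP --port=<controller_db_port> controller > /staging/path/for/controller_dump.sql
```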
3. In order to import the resulting file into Aurora, you need to replace the following line:
With:
Step 3: Use mysqldump to export stored procedures from the AppDynamics database
Run the following command to export the stored procedures from the AppDynamics database.
This command, through the --result-file option, dumps the stored procedures to /staging/path/for/mysql.proc.sql.
cd <controller_home>/db/bin
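Given the target file name, the command presumably dumps the mysql.proc table, which stores the stored procedure definitions in MySQL 5.x; a sketch:

```
./mysqldump -u root -p --result-file=/staging/path/for/mysql.proc.sql mysql proc
```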
Note
The Aurora database is protected by security groups to prevent access from unauthorized sources.
imq.persist.jdbc.mysql.property.url=jdbc\:mysql\://<aurora-db>.<aws-region>.rds.amazonaws.com\:3388/controller
Add the following line to the same file, right before the javaagent option:
<jvm-options>-Dappdynamics.controller.use.global.datadir.query.for.disk.space.check=false</jvm-options>
c. In the file <controller_home>/bin/controller_maintenance.xml, set the property db-host to the value of your Aurora
database:
d. In the file <controller_home>/bin/setup.xml, set the property db-host to the value of your Aurora database:
<property name="db-host" value="<aurora-db>.<aws-region>.rds.amazonaws.com"/>
<property name="db-port" value="3388"/>
4. Verify that the Controller is running successfully. The local MySQL database should be shut down, and you should see the migrated data in
Aurora, which can be verified via the Controller UI.
cd <controller_home>/appserver/glassfish/bin
./asadmin update-password-alias controller-db-password
cd <controller_home>/bin
./controller.sh stop-appserver
./controller.sh start-appserver
cd <controller_home>/appserver/glassfish/bin
./asadmin ping-connection-pool controller_mysql_pool
If your Controller instance is deployed on AWS and uses Aurora for its database, you can use the Enterprise Console CLI commands to either discover
and update or to just upgrade your Controller to the latest version. You cannot use the Enterprise Console UI, however, to perform the upgrade.
After backing up the Aurora database, use the upgrade method below that best meets your needs:
Discover and Upgrade - Use this method if you are not sure if you need to upgrade.
Upgrade - Use this method if you know you are using an older version and want to upgrade, for example, from 4.4.3 to the latest version.
You can also move the Controller to a new EC2 instance to meet updated performance requirements. For platform-agnostic upgrade instructions, see Upgrade the Controller Using the Enterprise Console.
The Enterprise Console upgrades the schema to the latest version. However, upgrading the Controller does not upgrade the Aurora DB server.
If your upgrade fails, you can resume by passing the flag useCheckpoint=true as an argument after --args.
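For example, a resumed upgrade might be submitted as follows; the platform name is a placeholder, and the subcommand should match the one used for your original upgrade run:

```
bin/platform-admin.sh submit-job --platform-name <platform_name> --service controller --job upgrade --args useCheckpoint=true
```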
3. Update the AMI after the job finishes.
If your upgrade fails, you can resume by passing the flag useCheckpoint=true as an argument after --args.
1. Create a new Aurora instance, using the database snapshot you took earlier as the source.
2. Stop the upgraded Controller if it is still running:
bin/platform-admin.sh stop-controller-appserver
1. Terminate the EC2 instance hosting the current Controller. This should cause a new EC2 instance to be automatically provisioned using the AMI.
The auto-scaling group and launch configuration are defined with the AMI. Therefore, if the existing EC2 instance in the auto-scaling
group dies, it is automatically replaced with a new EC2 instance based on the same AMI.
This page provides a high-level view of using the Enterprise Console to install the AppDynamics platform, including the Controller and Events Service.
See Discovery and Upgrade Quick Start for the upgrade quick start guide.
This quick start is intended to introduce you to the Enterprise Console and AppDynamics platform. Before you start, review the requirements pages for the
platform components.
Linux
./platform-setup-64bit-linux.sh
Windows
platform-setup-64bit-windows.exe
When the installation wizard launches, complete it to install the Enterprise Console. For more information about how to install the Enterprise
Console, see Install the Enterprise Console.
2. Verify that the Enterprise Console successfully installed by opening the GUI using the host and port you specified during installation:
http(s)://<hostname>:<port>
You can install the Events Service on a separate host directly by selecting a Custom Installation at step 2.
http(s)://<hostname>:<port>
1. In the Enterprise Console UI home page, click on the Platform you created previously.
2. Click Hosts in the left navigation menu.
3. Add at least three hosts for a scaled-up Events Service, providing a host name and an SSH credential that enables the Enterprise Console to
access the host.
4. When finished, go to the Events Service page.
5. From the more menu ( ... ), choose Scale Up Events Service.
6. In the Scale Up Events Service dialog, choose the Prod profile and select the hosts you created previously from the Host drop-down menu.
7. Click Submit.
8. Now set up a load balancer for the Events Service hosts, as described in Load Balance Events Service Traffic.
9. When done configuring the load balancer, you need to indicate to the Controller the addressable URL for the Events Service. In the Controller
Administration Console, set these Controller properties:
Set appdynamics.non.eum.events.use.on.premise.events.service to true.
Set appdynamics.analytics.server.store.url to the URL of the Events Service as exposed at the load balancer. For example,
appdynamics.analytics.server.store.url=http://es-master-host:9080/.
Before proceeding, see Set Up a High Availability Deployment. Specifically, follow the prerequisite steps in the section on configuring a Controller High
Availability pair environment.
1. In the Enterprise Console UI home page, click on the Platform you created previously.
2. Click Hosts in the left navigation menu.
3. Add a single new host. Provide a hostname and an SSH credential that enables the Enterprise Console to access the host.
4. When finished, go to the Controller page.
5. Click Add Secondary Controller.
6. Provide the same passwords as the primary database and Controller root password, and click Submit.
Now you have two Controllers running in passive-active high availability mode. To complete the configuration, set up a load balancer so that
you can easily switch traffic between the Controllers. See Use a Reverse Proxy. After configuring the load balancer, specify the URL for
reaching the Controller pair in the Enterprise Console UI by clicking Configurations > Controller Settings > Appserver Configurations and
entering the URL in the External URL field.
You now have a functioning platform, consisting of a Controller pair and a scaled-up Events Service instance.
Related pages:
This quick start guide describes how to use the Enterprise Console to discover existing AppDynamics components, such as a Controller, and use it to
upgrade them.
Linux
./platform-setup-64bit-linux.sh
Windows
platform-setup-64bit-windows.exe
The installation wizard launches. Complete the wizard to install the Enterprise Console. For more information about how to install the application,
see Enterprise Console.
2. Verify that the Enterprise Console successfully installed by opening the GUI using the host and port you specified during installation:
http(s)://<hostname>:<port>
If you created a platform already, such as with the CLI, complete the following steps:
1. Identify the host(s) where the Controller and Events Service are installed.
2. On the Credentials page, add credentials to the platform if you are using remote hosts.
Remember to provide the private key file for the Enterprise Console machine when adding a credential.
<file path to the key file> is the private key for the Enterprise Console machine. The installation process deploys the keys to the hosts.
At a minimum, make sure to bootstrap all of the hosts where the AppDynamics server-side components, namely the Controller and Events
Service, are installed.
You may also use the loopback address '127.0.0.1' or the machine's actual hostname in place of 'localhost'.
Related pages:
The Enterprise Console is the installer for the Controller and Events Service. You can use it to install and manage the entire lifecycle of new or existing on-
premises AppDynamics Platforms and components. The application provides a GUI and command-line interface.
There is no customer-facing application for SaaS Controllers since they are managed by the AppDynamics Operations team.
If your Enterprise Console host goes down, it does not impact Controllers, Events Service, or High Availability (HA) pairs. Those services will continue to
run independently of the application. You can then discover all platforms on a new Enterprise Console host without any impact on the components.
If the Enterprise Console is not available, the auto-failover option for HA pairs will also not be available when there is an issue with the primary
Controller. Therefore, it is important to keep the Enterprise Console host in a healthy state.
Multi-Platform Management
The Enterprise Console does not require all services within a given platform to have the same major version number.
Discover, install, and upgrade Controllers, Event Services, and MySQL nodes
Note that all services on Windows machines must be installed on the Enterprise Console host when using the Enterprise Console
since the application does not support remote operations on Windows.
Manage HA pair lifecycle without the use of the CLI based HA-toolkit or sudo privileges
Perform failover management
Other Features
Express Install
Custom Install
Configure and customize user inputs and installation/data directories for the Controller and Events Service
Install or upgrade single or HA Controllers and scaled-up Events Service in a distributed setup
You can manually install or upgrade multi-node Events Service clusters. See Install the Events Service on Windows or Upgrade the
Events Service Manually.
Lifecycle Monitoring
On the Platforms page, you can see all of your platforms, their statuses, and the statuses of their services. Once you have selected a platform to view, the
screen is separated into different tabs:
Hosts
Hosts are the actual hardware devices that are connected to the platform. You can add, remove, or change the credentials of your hosts in this tab.
Controller
The Controllers page shows the primary and secondary roles of the Controllers and their MySQL nodes. All lifecycle operations for Controllers and
MySQL nodes can be performed here. You can also see the External URL, which is the IP of the primary machine. Health statuses for the Controllers are
also available. You can Add a Secondary Controller if you would like to create an HA pair, then initiate an HA failover if you want to trigger a failover. You
can also start or stop a Controller, Upgrade a Controller and MySQL, and more.
Events Service
The Events Service page displays your Events Service, which can run on a single machine or as a cluster of three or more machines. Again, the entire
lifecycle of operations is available here.
Credentials
Credentials are the system usernames and private keys that the Enterprise Console uses to connect to your hosts over SSH.
Jobs
All of the jobs that you perform on your platform appear on the Jobs page, which lets you track job progress and see which jobs have failed.
Configurations
Configurations are important since they let you customize your installations. Configuration settings on the Enterprise Console are separated into three
categories: Platform, Controller, and Events Service Settings.
The Controller Settings contain the most configurable options. The AppServer Configurations under Controller Settings let you view all of the Domain
configurations, which you can initiate from this point, and configure your ports. The Database Configurations let you edit your MySQL settings, so you do
not have to tweak each machine directly; you can do everything from the Console itself.
You cannot use the Enterprise Console to install the End User Monitoring (EUM) Server. Instead, you must use a package installer that
supports interactive GUI or console modes, or a silent response file installation.
You can find the full On-Premises Deployment Architecture diagram on Application Performance Monitoring Platform, as well as a more detailed On-
Premises and SaaS architecture diagram on PDFs.
Platform 1
Platform 1 depicts a single Controller with a local Events Service and EUM Server. The local Events Service contains an
API Store.
Platform 2
Platform 2 depicts a single Controller with a remote, single host Events Service and EUM Server. The remote Events
Service contains an API Store and can be expanded to a cluster by adding two or more machines.
Platform 3
Platform 3 depicts an HA Controller pair with a remote Events Service cluster and EUM Server. The Events Service
cluster contains an API Store on all nodes. The cluster must have three or more nodes.
Platform 4
Platform 4 depicts a single monitoring Controller. This Controller monitors the HA pair in Platform 3 by receiving metrics from the App and Machine
Agents. See Manage a High Availability Deployment for more information.
Platform 5
Platform 5 depicts a single shared Events Service. A shared Events Service can connect to multiple Controllers from other
platforms, minimizing required maintenance and cost. See Events Service Deployment for more information.
Related pages:
The Enterprise Console can run on the same host as the Controller and the embedded Events Service. If this is the case, the machine you choose to run
the Enterprise Console must meet the requirements for all the components that run on that machine.
However, we recommend that you place the Enterprise Console on its own separate dedicated host, particularly if you deploy Controllers as High
Availability pairs.
The Enterprise Console UI has been tested with and supports the last two versions of these browsers:
Safari
Chrome
Firefox
Microsoft Edge
Internet Explorer
Certain types of ad blockers can interfere with features in the Enterprise Console UI. We recommend disabling ad blockers while using the Enterprise
Console UI.
CPU Requirements
The Enterprise Console is not CPU intensive and can therefore manage multiple platforms with two cores.
To access remote hosts, the Enterprise Console uses Java Secure Channel (JSch) API with the provided key file. The Enterprise Console does
not support SSH Jump Server. If you use an SSH Jump Server, or have jump host configuration, please contact your AppDynamics
representative for deployment options.
cURL
You must install cURL on systems that run Linux.
libaio
numactl package, which includes libnuma.so.1 for RHEL, CentOS, and Fedora, and libnuma1 for Ubuntu and Debian
glibc2.12
This glibc version is included in a given operating system release and therefore cannot be updated.
tzdata for RHEL, CentOS, Fedora, openSUSE Leap 12 and Leap 15, and Ubuntu version 16 and higher
libncurses5 (and above) for Ubuntu, CentOS, Debian, openSUSE Leap 12 and Leap 15, and Amazon Linux 2
This table provides instructions on how to install the libraries on some common flavors of the Linux operating system.
If you cannot install the library, check that you have a supported version of your Linux flavor.
Linux Flavor | Command
Ensure that only one package manager between dpkg and rpm is installed before running the Enterprise Console installer.
For RHEL8, CentOS8, and Amazon2 you can either manually install version 5 of ncurses or use version 6.
The ncurses-libs package depends on ncurses-base, so you must install ncurses-base first. These are examples of a trusted source for rpm downloads:
http://mirror.centos.org/centos/7/os/x86_64/Packages/ncurses-base-5.9-14.20130511.el7_4.noarch.rpm
http://mirror.centos.org/centos/7/os/x86_64/Packages/ncurses-libs-5.9-14.20130511.el7_4.x86_64.rpm
You must either create symlinks for ncurses-libs-5 which points to ncurses-libs-6, or install the ncurses-compat-
libs package, to provide ABI version 5 compatibility.
RHEL8 symlink:
sudo ln /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
sudo ln /usr/lib64/libncurses.so.6.1 /usr/lib64/libncurses.so.5
CentOS8 symlink:
sudo ln /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
sudo ln /usr/lib64/libncurses.so.6.1 /usr/lib64/libncurses.so.5
Amazon2 symlink:
sudo ln -s /usr/lib64/libncurses.so.6.0 /usr/lib64/libncurses.so.5
sudo ln -s /usr/lib64/libtinfo.so.6.0 /usr/lib64/libtinfo.so.5
RHEL8 compat-libs:
sudo yum install -y ncurses-compat-libs
CentOS8 compat-libs:
sudo yum install -y ncurses-compat-libs
Amazon2 compat-libs:
sudo yum install -y ncurses-compat-libs
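Before choosing between the symlink and compat-libs approaches, it can help to check whether the version 5 ABI is already resolvable on the host. A minimal check, assuming ldconfig is available (as on most RHEL-family systems):

```shell
# Check whether the ncurses/tinfo version 5 shared objects are resolvable.
# If either reports "missing", install ncurses-compat-libs or create the symlinks.
for lib in libncurses.so.5 libtinfo.so.5; do
  if ldconfig -p 2>/dev/null | grep -q "$lib"; then
    echo "$lib found"
  else
    echo "$lib missing"
  fi
done
```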
Ensure that only one package manager between dpkg and rpm is installed before running the Enterprise Console installer. This package manager utility
is used to verify mandatory packages before the Enterprise Console installation.
Note: For libncurses6, you need to create a symlink named libncurses5 that points to libncurses6.
Debian Use a package manager (such as APT) to install the library (as previously described in the Ubuntu instructions).
Note: For libncurses6, you need to create a symlink named libncurses5 that points to libncurses6.
Ensure that only one package manager between dpkg and rpm is installed before running the Enterprise Console installer. You also need
to add the openSUSE machine repository before installing the tzdata package.
You may run into file conflicts when two packages attempt to install files with the
same name but different contents. If you choose to continue, the old files and their
contents will be replaced.
Supported SSH Key Exchanges, Cipher Algorithms, MAC Types, and Host Key Types
You can use these SSH key exchanges, cipher algorithms, MAC types, and host key types to customize the SSH configuration on your host(s):
Key Exchanges
diffie-hellman-group-exchange-sha1
diffie-hellman-group1-sha1
diffie-hellman-group14-sha1
diffie-hellman-group-exchange-sha256
ecdh-sha2-nistp256
ecdh-sha2-nistp384
ecdh-sha2-nistp521
Cipher Algorithms
blowfish-cbc
3des-cbc
aes128-cbc
aes192-cbc
aes256-cbc
aes128-ctr
aes192-ctr
aes256-ctr
3des-ctr
arcfour
arcfour128
arcfour256
MAC Type
hmac-md5
hmac-sha1
hmac-md5-96
hmac-sha1-96
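For example, a host's sshd_config can be restricted to a compatible subset of these algorithms. This fragment is illustrative only; the exact selection is a site-specific hardening decision, and every value below is drawn from the supported lists above:

```
# /etc/ssh/sshd_config fragment (illustrative)
KexAlgorithms diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256
Ciphers aes256-ctr,aes128-ctr
MACs hmac-sha1
```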
This page provides information and instructions for installing the AppDynamics Enterprise Console to automate the task of installing and administering the
Controller and Events Service. You must install the Enterprise Console to install these components.
The Enterprise Console and Controller run on separate MySQL instances, which allows the Enterprise Console to manage the Controller's instance
independently of the Controller host and creates a lightweight setup that consumes less memory.
If you install the Enterprise Console on the Controller machine, it must be in a different directory in order to keep data separate. For instance, if
the Controller is installed in /opt/appdynamics/controller, the Enterprise Console might be in /opt/appdynamics/enterpriseconsole.
You must also avoid port conflicts with the Controller database, which listens on port 3388 by default; the Enterprise Console database listens on port 3377 by default.
The Enterprise Console installation path you choose must be writeable; that is, the user who installed the Enterprise Console must have write permission
to that directory.
The Enterprise Console prevents multiple users from running commands at the same time. If a second user attempts to run a command while another
command is in progress, the second command is not completed and an error message appears indicating that another command is in progress. To avoid
such conflicts, the Enterprise Console should generally be used by a single user at a time.
You can enable HTTPS for the Enterprise Console during installation. See HTTPS Support for the Enterprise Console.
Cross-platform (OS) installation, such as installing the Enterprise Console on Linux and the Controller on Mac or Windows, is not supported.
Software Requirements
On systems that run Linux, you must have cURL and netstat installed. Linux systems must also have the libaio library installed. This library provides
for asynchronous I/O operations on the system.
See Required Libraries for how to install libaio and other libraries on some common flavors of the Linux operating system.
Password Requirements
Database Root User's Password (mysqlRootUserPassword)
The password of the user account that the Controller uses to access its MySQL database.
Do not use the single quotation mark ('), double quotation mark ("), or at sign (@) characters in this password.
Controller root User's Password (rootUserPassword)
The Controller root user password. The root user is a Controller user account with privileges for accessing the system Administration Console.
This password is also used for the admin user of the built-in Glassfish application server. The Glassfish admin user lets you access the Glassfish
console and the asadmin utility. See Access the Administration Console.
Allowed characters in the password are: a-z, A-Z, 0-9, ., +, =, @, _, -, $, :, #, ,, (, ), !, {, }
User Name (Admin User Setup) (userName)
The username of the administrator account in the Controller UI. This is the administrator for the built-in account on single-tenant systems, or for
the initial account on multi-tenant systems. See Update the Root User and Glassfish Admin Passwords.
Usernames and passwords cannot include the @ or ! character.
Also note that if this account will be used to access the REST API, additional limitations on the use of special
characters in usernames apply. See Create and Manage Tenant Users.
In the password field, you may include the space character at the beginning as well as in the middle of the password string. However, you cannot start
passwords with the space character when using the response.varfile.
GUI Installation
Before starting, get the Enterprise Console installer version appropriate for your target system. You can get the installer from the AppDynamics Downloads.
When ready, follow these steps to install the Enterprise Console:
Linux
./platform-setup-64bit-linux.sh
Windows
platform-setup-64bit-windows.exe
It is recommended that you right-click the .exe file and select Run as Administrator.
3. After the GUI launches, use it to complete the installation. In Linux, you may also follow the steps in the installation wizard to complete the
console installation.
If you install the Enterprise Console on AWS, use the public DNS for the Enterprise Console hostname when prompted.
Silent Installation
To use the silent installation method, add the -q option, the response file, and the destination directory to the command to run the installer. For example, in
Linux, run the following command:
It is recommended that, if possible, you provide an absolute path, rather than a relative path as shown in the example, as the installation path specified
in the dir argument value.
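Assembled, the silent invocation looks like the following sketch. The -q, -varfile, and -dir flags follow the installer's silent-mode conventions, and both paths are placeholders for your environment:

```shell
# Silent install: -q (quiet mode), the response file, and an absolute destination dir.
# Both paths are placeholders; substitute your own.
VARFILE=/tmp/ec-response.varfile
INSTALL_DIR=/opt/appdynamics/platform
CMD="./platform-setup-64bit-linux.sh -q -varfile $VARFILE -dir $INSTALL_DIR"
echo "$CMD"
# eval "$CMD"   # run on the target host with the installer in the current directory
```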
serverHostName=HOST_NAME
sys.languageId=en
disableEULA=true
platformAdmin.port=9191
platformAdmin.databasePort=3377
platformAdmin.dataDir=/opt/appdynamics/platform/mysql/data
platformAdmin.databasePassword=ENTER_PASSWORD
platformAdmin.databaseRootPassword=ENTER_PASSWORD
platformAdmin.adminPassword=ENTER_PASSWORD
platformAdmin.useHttps$Boolean=false
sys.installationDir=/opt/appdynamics/platform
The sys.languageId and platformAdmin.dataDir properties are optional. If not specified, the data directory will be in the /mysql directory
under the platform directory.
Windows
serverHostName=HOST_NAME
sys.languageId=en
disableEULA=true
sys.adminRights$Boolean=true
platformAdmin.port=9191
platformAdmin.databasePort=3377
platformAdmin.dataDir=C\:\\AppDynamics\\Platform\\platform-admin\\mysql\\data
platformAdmin.databasePassword=ENTER_PASSWORD
platformAdmin.databaseRootPassword=ENTER_PASSWORD
platformAdmin.adminPassword=ENTER_PASSWORD
platformAdmin.useHttps$Boolean=false
sys.installationDir=C\:\\AppDynamics\\Platform
The sys.languageId and platformAdmin.dataDir properties are optional. If not specified, the data directory will be in the \mysql directory
under the platform directory.
If you install the Enterprise Console on AWS, use the public DNS for the serverHostName value.
GUI: A graphical interface within a web browser to install the Controller and Events Service. You can select from Express Install or Custom Install
of the platform, which includes the option to install a Controller and Events Service.
Command-line: A CLI to install the Controller and Events Service.
After installing the Enterprise Console, you can select from the Express Install or Custom Install of the platform, which includes the option to install a
Controller. For more information about those options, see Enterprise Console.
For information on installing the Controller or Events Service in unattended mode or via the command line, see Enterprise Console Command Line.
http(s)://<hostname>:<port>
Specify the port and hostname you used when you installed the Enterprise Console. The default port is 9191. This port must be open in your firewall
rules so that you can reach it from the machines you administer the platform from. See Port Settings.
For example:
http(s)://aHost.aDomain:9191
With the GUI, you can install and manage the components of the AppDynamics platform, including tasks such as adding hosts or credentials, installing a
Controller, and monitoring jobs.
If you cannot access the GUI, verify that the hostname and port number are correct. Additionally, ensure that the Enterprise Console is running.
The first time you access the GUI, the Enterprise Console shows the following options for installing the AppDynamics Platform:
Express: Select this option for new installations of the Controller and Events Service. The services are installed on the same host.
Custom: Select this option to customize your installation, including installing or upgrading the Controller and Events Service on separate
hosts. By installing the Events Service on a separate host, you can create a 1 or 3+ node Events Service based on your needs. Installing
an Events Service on a separate host with the Enterprise Console is only supported on Linux. If you want to install the Events Service on
a separate host on Windows, see Install the Events Service on Windows.
The Events Service is installed by default with a Custom Installation unless you choose to unselect the Install Events Service
option.
Discover and Upgrade: When performing a custom installation, you have the option to discover and upgrade an existing
AppDynamics deployment, such as a Controller or Events Service. For example, if you used the package installer to install the
Controller in a previous version of AppDynamics, you can use the discover and upgrade option to add the Controller to the
AppDynamics platform that the Enterprise Console manages. The application then upgrades the Controller to the same
version as the Enterprise Console. Verify that the Controller and MySQL are running before you attempt to discover and
upgrade them.
./platform-setup-64bit-linux.sh -c -VdisableEULA=true
<fontconfig>
<alias>
<family>serif</family>
<prefer><family>Utopia</family></prefer>
</alias>
<alias>
<family>sans-serif</family>
<prefer><family>Utopia</family></prefer>
</alias>
<alias>
<family>monospace</family>
<prefer><family>Utopia</family></prefer>
</alias>
<alias>
<family>dialog</family>
<prefer><family>Utopia</family></prefer>
</alias>
<alias>
<family>dialoginput</family>
<prefer><family>Utopia</family></prefer>
</alias>
</fontconfig>
Related pages:
You can choose Express Install on the Install page when you first use the Enterprise Console or when you want to create a new platform. This install
option provides you with a quick and simple way to install a fresh single node Controller and embedded Events Service.
This page guides you through the steps to perform an Express Install. Before you install the Controller with the Enterprise Console, verify that the
Enterprise Console is running and the host machine meets the requirements for the Controller.
If you use the GUI to install the Controller, you can create the platform at the same time. If you use the command line, you must create the platform prior to
installing the Controller. For more information, see Administer the Enterprise Console. Express Install does not give you the option to choose the version
you would like to install, and instead automatically installs the latest version of each service. If you would like more control over your installation, see
Custom Install.
After you install the Enterprise Console, you can complete the platform installation process with the GUI:
1. Check that you have fulfilled the Enterprise Console prerequisites before starting.
2. Open a browser and navigate to the GUI:
http(s)://<hostname>:<port>
The Installation Path is an absolute path under which all of the platform components are installed. The same path is used for all hosts
added to the platform. Use a path which does not have any existing AppDynamics components (Controller, Events Service, etc.)
installed under it.
The path you choose must be writeable; that is, the user who installed the Enterprise Console should have write permission to that folder. The
same path must also be writeable on all of the hosts that the Enterprise Console manages.
5. Add a host by entering host machine-related information: Host Name, Username, and Private Key. To add the Enterprise Console host, click Add
Enterprise Console Host, which will automatically populate the text field with the hostname of the Enterprise Console machine. You do not need
to provide a credential. The Controller and Events Service will be installed on the same host as the Enterprise Console.
If the Controller is to be installed on a Windows machine, the Enterprise Console should be on the same machine, because the
Enterprise Console does not support remote operations on Windows hosts.
If you use the Enterprise Console host to install the Controller, then no Username or Private Key is required. You can also install the
Controller and Events Service on a remote host. In that case, the Username and Private Key of the remote host would be required.
As of OpenSSH version 7.8, generated SSH keys use the OpenSSH format with Ed25519. However, the Enterprise Console
expects SSH keys in the older PEM format.
See Install the Events Service on Linux for additional setting requirements and to learn how to scale up an embedded Events Service.
Related pages:
The Controller is the central component of the AppDynamics platform. Other components such as the Events Service connect to the Controller and stream
metrics to be displayed. The Enterprise Console Custom Install option provides you with a configurable way to install a fresh Controller and Events
Service. You can also Discover & Upgrade older platform services using Custom Install.
This page describes how to use Custom Install to install the Controller. Before you install the Controller with the Enterprise Console, verify that the
Enterprise Console is running and the host machine meets the requirements for the Controller.
If you use the GUI to install the Controller, you can create the platform at the same time. If you use the command line, you must create the platform prior to
installing the Controller. For more information, see Administer the Enterprise Console. If instead, you would like to get started with your platform as soon as
possible, see Express Install.
After you install the Enterprise Console, you can complete the platform installation process with the GUI:
1. Check that you have fulfilled the Enterprise Console prerequisites before starting.
2. Open a browser and navigate to the GUI:
http(s)://<hostname>:<port>
The Installation Path is an absolute path under which all of the platform components are installed. The same path is used for all hosts
added to the platform. Use a path which does not have any existing AppDynamics components installed under it.
The path you choose must be writeable; that is, the user who installed the Enterprise Console should have write permission to that folder. The
same path must also be writeable on all of the hosts that the Enterprise Console manages.
5. Add a host by entering host machine-related information: Host Name, Username, and Private Key. This is where the Controller and Events
Service will be installed onto.
As of OpenSSH version 7.8, generated SSH keys use the OpenSSH format with Ed25519. However, the Enterprise Console
expects SSH keys in the older PEM format.
For more information about how to add credentials and hosts, see Administer the Enterprise Console.
If the Controller is to be installed on a Windows machine, the Enterprise Console should be on the same machine, because the
Enterprise Console does not support remote operations on Windows hosts.
6. Install Controller:
a. Select Install.
b. Select an available Target Version from the dropdown list.
The list is populated by versions that the Enterprise Console is aware of. This means that you can install the Controller to any
intermediate version or to the latest version as long as the Enterprise Console installer has been run for those versions.
c. Select a Profile size for your Controller. See Controller System Requirements for more information on the sizing requirements.
d. Enter the Controller Primary Host.
e. Enter the Controller Secondary Host if you would like to install an HA pair.
It is highly recommended that you have the Enterprise Console on its own separate dedicated host, especially in the case of
HA installations. This means you would need three hosts for the recommended HA setup. If the Enterprise Console and
Controller are on the same host and that host becomes unavailable, the Enterprise Console will not be able to failover to the
other Controller.
You can set up an HA pair at a later time. See Set Up a High Availability Deployment for additional setting requirements and
to learn how to add a secondary controller after your initial installation.
If you do not install a Controller at this time, you can always do so later by navigating to the Controller page in the GUI and
clicking Install Controller.
You do not need to specify the installation or data directory for the Events Service installation. If you do, use a directory different
from the platform or MySQL data directory.
d. Optional: Enter the Elastic Search, REST API Admin, REST API, and Unicast Ports.
e. Enter the Events Service Host. You can deploy a single Events Service node or an Events Service cluster made up of three or more
nodes. The minimum size of an Events Service cluster is three. You can always add more at a later time on the Events Service page.
You can set up a scaled up Events Service at a later time. See Install the Events Service on Linux for additional setting
requirements and to learn how to scale up an embedded Events Service.
If you do not install an Events Service at that time, you can always do so later by navigating to the Events Service page in
the GUI and clicking Install Events Service.
8. Click Install.
After clicking Install you can monitor the status of your platform creation jobs on the Jobs page, which include Add Hosts, Controller Install, and Events
Service Install Jobs. Note that the Controller Install Job takes a considerably longer time to complete than the other two jobs. When the jobs successfully
complete, you can check the status of your platform, obtain the URL of the Controller, update platform configurations, and manage the lifecycle of your
services.
Once any high availability controllers are installed, they can be found under the Controller page. All high availability lifecycle operations such as start, stop,
upgrade, and failover can be performed from this page.
You can use the GUI or command line to perform the following platform administration tasks with the Enterprise Console:
Some tasks may not be available through both the GUI and command line. Most of the commands described here are based on the Linux
command line and cover common tasks for managing the platform. You can run similar commands with the Windows command prompt by
replacing the platform-admin.sh script with platform-admin.exe cli.
Run the commands from the <Enterprise Console installation directory>/platform-admin directory. This page contains the minimum
options and parameters required to run a command.
Some commands may have more options and parameters. To see these additional options, run the command with -h specified. For example, run the
following command to see all the options and parameters for the create-platform command:
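A sketch of that help invocation, assuming the create-platform subcommand name used elsewhere in the Enterprise Console CLI:

```shell
# Print all options and parameters for creating a platform.
# Run from the <Enterprise Console installation directory>/platform-admin directory.
CMD="bin/platform-admin.sh create-platform -h"
echo "$CMD"
```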
The same user who installed the Enterprise Console should be the one who starts or stops it.
Manage Platforms
The platform is the collection of AppDynamics components and their hosts. The Enterprise Console supports up to 20 platforms at a time by default.
Create a Platform
To use the Enterprise Console for end-to-end installation and management, you must first create a platform. The Enterprise Console creates the platform
when you complete the Express or Custom installations or discover existing components in the GUI.
To use the command line to create a platform, run the following command:
The platform installation directory is the absolute directory where the Enterprise Console installs all AppDynamics components on all of its hosts. Once you
add a host to the platform, you can no longer change the directory. Additionally, the directory cannot contain a space.
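The platform-creation command itself is presumably of this form; the platform name and installation directory below are placeholders (remember that the directory must be absolute and must not contain a space):

```shell
# Create a platform named "myplatform". The installation dir must be absolute,
# contain no spaces, and be writeable on every host the platform will manage.
CMD="bin/platform-admin.sh create-platform --name myplatform --installation-dir /opt/appdynamics/product"
echo "$CMD"
```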
Delete a Platform
You can use the Enterprise Console to delete a platform that is no longer in use. You may also want to consider editing the Platform's configuration
instead. You can perform either action on the Platform view page of the GUI.
To use the command line to delete a platform, run the following command:
If you just deleted your current platform, you must clear the value of the APPD_CURRENT_PLATFORM variable to prevent unexpected errors
when running future commands.
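A sketch of the delete command, together with clearing the environment variable mentioned above (the platform name is a placeholder):

```shell
# Delete a platform by name, then clear the current-platform variable so that
# future commands do not reference the deleted platform.
CMD="bin/platform-admin.sh delete-platform --name myplatform"
echo "$CMD"
unset APPD_CURRENT_PLATFORM
```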
ls ~/.ssh/
id_rsa
id_rsa.pub
Add Credential
When you add a credential, you need the following information:
Credential name
Username
Private key file
The credential name is the unique identifier for a credential and is used to specify the credential when you perform tasks such as adding a host.
AppDynamics recommends that you follow a consistent naming convention for all of your credential names. The id_rsa RSA private key should be
created using the OpenSSL PEM encoding format rather than the OpenSSH standard encoding.
The Unix system user specified in the username field must have write access to the platform directory.
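The CLI form of adding a credential presumably looks like the following; the flag names mirror the add-hosts command shown below, and the angle-bracket values are placeholders:

```shell
# Add an SSH credential; the <...> values are placeholders to substitute.
CMD='bin/platform-admin.sh add-credential --credential-name <credential name> --type ssh --user-name <username> --ssh-key-file <file path to the key file>'
echo "$CMD"
```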
Where <file path to the key file> is the private key for the Enterprise Console machine. The installation process deploys the keys to the hosts.
Remove Credential
Remove a credential that is no longer used. You cannot remove a credential that is still used by a host. You can remove a credential in the GUI by
selecting the credential and clicking Delete.
Manage Hosts
The hosts are the machines used to run AppDynamics components such as the Controller and Events Service. For example, the Events Service can run
on the same host as the Controller, a single host, or a cluster of three or more hosts.
You must properly configure the credential you use to add a new host on a remote host. This means that for the private key you specify, you
must add the corresponding public key to the remote host's ~/.ssh/authorized_keys file.
The Controller and Events Service must reside on the same local network and communicate by internal network. Do not deploy the cluster to nodes on
different networks, whether relative to each other or to the Controller where the Enterprise Console runs. When identifying cluster hosts in the
configuration, you will need to use the internal DNS name or IP address of the host, not the externally routable DNS name.
For example, in an AWS deployment, use the private IP address, such as 172.31.2.19, rather than the public DNS hostname, such as
ec2-34-201-129-89.us-west-2.compute.amazonaws.com. You must then go to the Appserver Configurations under Controller Settings in the Enterprise
Console GUI and edit the external URL so you can access the page.
The host that runs the Enterprise Console is automatically created and added to the platform as the hostname of the Enterprise Console machine if you
use the GUI to install or discover components. If you do not use the GUI, you must manually add this host.
Hosts can be managed through the Enterprise Console GUI on the Hosts page or the command line.
The id_rsa RSA private key should be created using the OpenSSL PEM encoding format rather than the OpenSSH standard encoding.
2. Identify the Enterprise Console Linux AppDynamics user's public/private key pair (usually in ~/.ssh).
3. Add the Enterprise Console Linux AppDynamics user's public key (~/.ssh/id_rsa.pub) into the remote Controller server Linux user's ~/.ssh
/authorized_keys.
5. Test the SSH connection from the Enterprise Console server with the following:
6. Verify that the remote hostname is printed. You may have to first answer yes to trust the server fingerprint.
7. Access the Enterprise Console UI Credential page, and add or edit the existing credentials.
8. Create a single credential, which will likely be the same for all remote hosts, with a name like: EC-<ec linux user name>-<remote appd
user name> For example, EC-ecappduser-appduser , or EC-appdyn if the username is the same on the Enterprise Console and remote server.
9. Enter the remote server Linux username. It might be the same as the local Enterprise Console AppDynamics user.
10. Supply the Enterprise Console Linux user's ~/.ssh/id_rsa contents as the private key as chosen in step 2.
You should also check that the user ID you created matches the one in the credentials. Additionally, the path you specify for the platform base
directory must exist.
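Assuming a standard OpenSSH client, the key-related steps above can be sketched as follows. The key path, user name, and host name are placeholders, and the remote steps are shown as comments because they depend on your environment:

```shell
# Generate an RSA key pair in the OpenSSL PEM encoding the Enterprise
# Console expects (-m PEM produces "BEGIN RSA PRIVATE KEY" rather than
# the newer OpenSSH encoding); path and key size are illustrative:
ssh-keygen -t rsa -b 4096 -m PEM -f ./ec_id_rsa -N ''

# Confirm the private key uses the PEM encoding:
head -n 1 ./ec_id_rsa

# Step 3 (sketch): install the public key on the remote Controller host:
#   ssh-copy-id -i ./ec_id_rsa.pub appduser@controller-host

# Step 5 (sketch): test the connection; the remote hostname should print:
#   ssh -i ./ec_id_rsa appduser@controller-host hostname
```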
Instead of listing the hosts with --hosts, you can specify a text file with a line-separated list using the following command:
bin/platform-admin.sh add-hosts --host-file <file path to host file> --credential <credential name>
If you do not use the GUI, you must add the host for the Enterprise Console. This is also the host used by the Controller and embedded Events Service.
The host is named "localhost" and does not require credentials. For example, run the following command:
You may also use the loopback address '127.0.0.1' or the machine's actual hostname.
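For example, the host file and the two add-hosts variants might look like the following sketch. The hostnames and the credential name are placeholders, and the platform-admin.sh invocations are shown as comments because they require a running Enterprise Console:

```shell
# A line-separated host file (hostnames are placeholders):
cat > /tmp/appd-hosts.txt <<'EOF'
controller-host.internal
events1.internal
events2.internal
EOF
wc -l < /tmp/appd-hosts.txt

# Register the listed hosts with a previously created credential:
#   bin/platform-admin.sh add-hosts --host-file /tmp/appd-hosts.txt --credential EC-appdyn
# Add the Enterprise Console host itself, which needs no credentials:
#   bin/platform-admin.sh add-hosts --hosts localhost
```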
Remove Hosts
Before you remove a host, ensure that you remove all AppDynamics components from the host. You can remove a host in the GUI by selecting the host
and clicking Remove.
Instead of listing the hosts with --hosts, you can specify a text file with a line-separated list using the following command:
If a host becomes unreachable, you can use the following command to remove it:
This removes the host and all of its associated metadata from the Enterprise Console database.
bin/platform-admin.sh stop-platform-admin
bin/platform-admin.sh stop-platform-admin
This is to stop the MySQL DB and start it without using the --skip-grant-tables option that is in the next step.
bin/platform-admin.sh start-platform-admin
9.
Use the user ID that owns the AppDynamics platform installation when running the commands.
4. Execute the following queries, replacing <new_password_here> with your new password before you run them.
5. Verify the login by running the following command in the <EC_home>/Platform/mysql/bin directory:
./platform-admin.sh stop-platform-admin
13. Start the Enterprise Console by running the following command in the <EC_home>/Platform/platform-admin/bin directory:
./platform-admin.sh start-platform-admin
After you change the installation directory, you must add hosts and reinstall AppDynamics components.
http://econsole-host:9191/service/version
The purge cleanup script (purge.sh) is available in Enterprise Console version 4.5.17 or later.
To remove unused artifacts from a prior Enterprise Console installation, you can run the purge cleanup script (purge.sh). The script preserves
any artifacts required to run and upgrade the components in your existing platform.
The purge cleanup script must be run by the same user who installed the Enterprise Console.
Verify that there are no jobs currently running from the Enterprise Console.
1. Use the plan command to show the purge script execution plan.
<platform-admin>/pa-purger/purge.sh plan
The execution plan shows you the exact commands to run when you use the apply command.
2. Use the apply command to run and apply the commands of the purge script execution plan.
<platform-admin>/pa-purger/purge.sh apply
Old JREs that are not being used by any product services (Events Service or Controller) are cleared by the upgrade-orcha job, which is run
when the Enterprise Console is upgraded. You can also run the upgrade-orcha job on an ad hoc basis to clean up old JREs.
#linux
./platform-admin.sh upgrade-orcha
#windows
platform-admin cli upgrade-orcha
The Enterprise Console command-line utility allows you to perform orchestration tasks in an automated way. It has the following limitations:
It does not provide all of the functionality available in the web UI.
The command-line utility is supported only on the host where the Enterprise Console is installed.
To see the operations available for the Enterprise Console, navigate to the platform-admin/bin directory on the command line and run the script with -h specified:
And you can view the format for each command in the command line by specifying the -h argument for a specific command:
You can also use bin/platform-admin.sh list-jobs --service <controller or events-service> to see a list of jobs
available for the provided service. You can then see what parameters are required for the provided job using bin/platform-admin.sh
list-job-parameters --job <job_name> --service <controller or events-service>.
Not all commands available on Linux are available on Windows. Refer to the list of the Enterprise Console commands displayed with the -h
parameter.
The Enterprise Console prevents multiple users from running commands at the same time. If a second user attempts to run a command while another
command is in progress, the second command is not completed and an error message appears indicating that another command is in progress. To avoid
such conflicts, the Enterprise Console should generally be used by a single user at a time.
logout
reset-password
If you forget your admin password or run into a 401 error, run this command to reset your password to its default value, admin. You will need to
log out then log back in for this change to take effect.
change-password --user-name <username> --password <old_password> --new-password <new_password>
start-platform-admin
The Enterprise Console must be running to install or administer Events Service nodes.
If you happen to provide the --platform-name parameter while APPD_CURRENT_PLATFORM is set, the value passed through the flag will override the
environment variable.
Configurations are important since they let you customize your installations. The Enterprise Console enables you to configure these settings via GUI and
CLI. However, note that there is limited support for updating service configurations through the CLI. Therefore, it is recommended that you use the GUI for
updating configurations, especially for multi-line values.
Configuration settings on the Enterprise Console are separated into three categories: Platform, Controller, and Events Service Settings.
Platform Properties
The Platform configuration allows you to update the platform description in the UI. The platform path can be updated only through the CLI.
Controller Settings
The Controller Settings pages allow you to tune your Controller. You can configure settings such as database configurations, JVM options, listeners, and
thread pools for both single and high-availability Controllers.
Appserver Configurations
The AppServer Configurations page under Controller Settings allows you to edit most of the domain.xml configurations. You can also change the ports and
update the Controller from a smaller to a larger profile. The configurations are categorized under Basic, JVM Options, and SSL Certificate Management:
Basic
Profile: Demo, small, medium, or large
You can change the Controller profile from a smaller profile to a larger one. Before doing so, ensure that the host machine meets the
requirements for the profile size you want to use.
The Enterprise Console checks the disk size for the transaction log dir and db data dir for medium and large profiles only. If the
transaction log is in a separate mount, then it will check for half of the minimum recommended disk size.
This process is not reversible: you cannot move from a larger to a smaller profile size. If you have tuned the Controller heap settings or database
configuration settings to values greater than the recommended settings for the new profile, those settings are preserved. Otherwise, the
AppDynamics recommended settings are applied.
To increase the Controller profile size, navigate to AppServer Configurations by choosing the platform, Configurations, Controller Settings, and
Appserver Configurations. At the top of the page, select a new profile, then click Save.
Alternatively, you can use the CLI to increase the Controller profile size to meet increased demand:
After a fresh installation or upgrade, the database user password will be hidden in domain.xml in the Appserver directory as an alias.
Alternatively, you can use the CLI to change your database user password:
Advanced configurations
NUMA Controller Configuration: This setting is preserved upon upgrades.
NUMA Database Configuration: This setting is preserved upon upgrades.
JVM Options
JVM Options: You can update the JVM options via this page without having to use the modifyJvmOptions utility or any other external scripts.
Domain Http Services
Domain Protocols
Domain Network Listeners
Domain Transports
Domain Thread Pools
You can also update the domain config settings using the CLI by following the steps below:
1. Download all four configurations to individual files. See Deploy the Controller on AWS for more information on the config file.
2. Create and load four variables:
new_network=`cat domain-network-listeners.txt`
new_protocol=`cat domain-protocols.txt`
new_thread=`cat domain-thread-pools.txt`
new_transports=`cat domain-transports.txt`
3. Go to platform-admin/bin and log in.
cd platform-admin/bin
If your RAM is greater than 200 GB and you are using a NUMA-based architecture, you can specify the Linux nodes (typically CPU socket
numbers) from which both processes and memory will be allocated for each AppDynamics component. For example, on a two-socket motherboard,
AppDynamics recommends the following node configuration settings:
For the node configuration settings, you can enter an integer or comma separated list of integers. For example, for Glassfish, you can enter 0 or
0,1; for MySQL, you can enter 1 or 2,3.
DB Configuration Settings
Data Directory: You can change the datadir path and database port via this page.
You cannot change certain configurations, such as the MySQL root directory, through the Enterprise Console.
DB Root Password
The Enterprise Console does not allow you to change the MySQL root password. However, if you change the MySQL root password for the
Controller, you should update the database root password in the Database Configurations page so that the Enterprise Console is aware of the
new password.
If you are using High Availability Toolkit (HATK), you must manually apply these settings on the secondary Controller and restart the
secondary server.
These instructions are specific to the UNIX operating system.
1. Copy the db.cnf file from your primary Controller host onto the Enterprise Console host, for example db.cnf.new file.
2. Edit the db.cnf.new file to add new settings or update existing values.
3. Load the db.cnf.new file into an environment variable:
new_db_cnf=`cat db.cnf.new`
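Steps 1 through 3 can be sketched as follows. The file contents and settings are illustrative rather than recommendations, and the final update command that consumes the variable is omitted here:

```shell
# Steps 1-2 (sketch): a copy of db.cnf with an added or updated setting:
cat > /tmp/db.cnf.new <<'EOF'
[mysqld]
max_connections=400
innodb_buffer_pool_size=8G
EOF

# Step 3: load the edited file into an environment variable:
new_db_cnf=$(cat /tmp/db.cnf.new)
printf '%s\n' "$new_db_cnf" | grep max_connections
```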
You can also view and edit the SSL Certificate here.
cd platform-admin/bin
cd platform-admin/bin
3. Open controller-configs.conf, and copy all JVM options. Then paste them into a separate file, and edit the desired parameters.
4. Run the following command on the Enterprise Console host:
For example:
You can also remove the Controller from the Enterprise Console and rediscover it to preserve the configuration changes:
cd platform-admin/bin
2. On the Controller page, click on Remove Controller, or run the following commands on the Enterprise Console host:
If removeBinaries=false then the Enterprise Console forgets the Controller without impacting or uninstalling the Controller.
3. Discover the Controller by using the Discover & Upgrade feature as if you were upgrading the Controller using the Enterprise Console, or run the
following command on the Enterprise Console host:
You must specify and provide the full path to the existing Controller directory.
The Enterprise Console saves all jobs that you perform on the application on the Jobs page.
You can view additional job details by clicking View Details in the Status column of the job.
If the job you are viewing failed, then you can see what the error was that caused it to fail, and retry it at its last checkpoint.
platform-admin-server.log: Information about events of the install process such as extraction, preparation, and other post-processing tasks. It is
located at <platform_admin_home>/logs.
server.log: Information for the embedded Glassfish application server used by the Controller. It is located at <controller_home>/logs.
audit.log: Information about Account/User/Group/Role CRUD/User login/logout/SSH connections, and all other operations. This can be used to
forward auditable events from the AppDynamics controller into a central log management system or SIEM. It is located at <controller_home>
/logs, and replicates the Controller Audit Report.
database.log: Information for the MySQL database that is used by the Controller. It is located at <controller_home>/db/logs.
startAS.log: Output generated by the underlying Glassfish domain for the Controller.
orcha-modules.log: Information about all the tasks and commands that are run during installation. platform-admin-server.log has high-level
information about the task executed. However, orcha-modules.log has more information on the specific command and output. It is located at <
platform-dir>/orcha/<version>/orcha-modules/logs/orcha-modules.log.
You can also run the following command on the Enterprise Console host:
Log Rotation
The application server is preconfigured to rotate the server.log file regularly, based on settings in the domain configuration file. On Windows, there is an
additional log file for the Glassfish service launcher that needs to be rotated.
For the other log files, such as database.log or audit.log, you may need to set up log rotation to prevent them from consuming excessive disk space. The
audit.log file, in particular, may grow quickly because it logs the SSH connections to Controller and Events Service hosts that occur every minute. You also
need to set up log rotation for an additional log file, <controller_home>/db/data/slow.log. This log contains information about slow MySQL
queries.
The tool you use to perform the rotation depends on your operating system. On Linux, you can use the mysql-log-rotate script. The script is included
with the Controller database installation at <controller_home>/db/support-files. You need to modify the script for your environment since it is not
set up to rotate the database.log file by default. On other systems, you need to create or install a script that performs log rotation and make sure that it gets
run regularly, for example, by cron or an equivalent task scheduler.
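As a sketch, a logrotate rule for these files might look like the following. The paths assume an illustrative <controller_home> of /opt/appdynamics/controller, and copytruncate is used so MySQL can keep writing to its open file handle without a log flush:

```shell
# Write an illustrative logrotate rule for the Controller database logs:
cat > /tmp/appd-controller-logs <<'EOF'
/opt/appdynamics/controller/db/logs/database.log
/opt/appdynamics/controller/db/data/slow.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
    copytruncate
}
EOF

# Installed under /etc/logrotate.d/, this rule would be picked up by the
# distribution's regular logrotate run.
grep 'rotate 8' /tmp/appd-controller-logs
```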
Retention Period
Enterprise Console logs in platform-admin/logs are automatically archived in a .gz format with a date-time stamp after they grow too large. The size of
each archived log is around 300 KB. Only the latest seven files are archived in the logs directory.
If you want to retain the archived logs for longer periods of time, you can configure the settings in the Dropwizard configuration file, PlatformAdminApplication.yml.
Under the logging/appenders/type: file section, you need to specify a larger archivedFileCount, as well as maxFileSize. See the
Dropwizard Configuration Reference for more information.
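The relevant section of PlatformAdminApplication.yml might look like the following sketch; only archivedFileCount and maxFileSize are being raised, and the other values shown are illustrative:

```yaml
logging:
  appenders:
    - type: file
      currentLogFilename: logs/platform-admin-server.log
      archivedLogFilenamePattern: logs/platform-admin-server-%d.log.gz
      archivedFileCount: 30    # keep 30 archives instead of the default
      maxFileSize: 10MB
```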
${com.sun.aas.instanceRoot}/../../../../logs/server.log
If you specify a directory that does not exist, it is created when you restart the application server.
4. Change the database.log location by opening the <controller_home>/db/db.cnf file. You can also change the db.cnf file from the
Enterprise Console GUI Database Configurations page. Doing so also restarts the database server.
The Enterprise Console takes a backup when you modify these configurations.
5. Set the value of the log-error property to the new location of the database.log file. This directory location must exist before you restart the
Controller or you will get start-up errors.
6. From either Linux or Windows, change the log location to the new location:
Linux File
<controller_home>/bin/controller.sh
nohup ./asadmin start-domain domain1 > $INSTALL_DIR/logs/startAS.log
Windows File
<controller_home>/bin/controller.bat
call asadmin.bat start-domain domain1 > %INSTALL_DIR%\logs\startAS.log
You can set the logging level for each Controller component. The components include:
Agents
Business Transactions (BTS)
Events
Incidents
Information Points (IPS)
Metrics
Orchestration
Rules
Snapshots
1.
The Enterprise Console keeps all data pertaining to its managed AppDynamics platform deployment in a MySQL database. To back up an Enterprise
Console installation, you use MySQL commands to export and restore data. You will also need to back up the Enterprise Console's secure credential store
file.
Backing up Enterprise Console data is a separate consideration from backing up a Controller. For more information on Controller backups, see
Controller Data and Backups.
In the event that your Enterprise Console host fails, follow these steps to ensure that you can recover.
This puts the export file into a named file in the /tmp directory. Choose another location, if appropriate.
2. Change to the mysql/bin directory of Enterprise Console:
cd <platform>/mysql/bin
4. Copy the Enterprise Console's secure credential store (SCS) file <platform>/platform_admin/.appd.scs to the same directory on your
backup Enterprise Console instance. Make sure you retain the same file permissions for the SCS file (644) on the backup instance of the
Enterprise Console.
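Step 4 can be sketched as follows; the paths are placeholders standing in for the <platform>/platform_admin directories on the two machines:

```shell
# Stand-in for the source secure credential store (placeholder paths):
mkdir -p /tmp/ec-src /tmp/ec-backup/platform_admin
touch /tmp/ec-src/.appd.scs
chmod 644 /tmp/ec-src/.appd.scs

# Copy with -p so the 644 permissions are preserved on the backup copy
# (between machines you would use scp -p instead):
cp -p /tmp/ec-src/.appd.scs /tmp/ec-backup/platform_admin/.appd.scs

# Verify the mode on the copy:
stat -c '%a' /tmp/ec-backup/platform_admin/.appd.scs
```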
Related pages:
To upgrade the Enterprise Console, you run the installer for the version of the application to which you want to upgrade on the Enterprise Console
machine. The installer detects the Enterprise Console installation and upgrades that instance.
Before Upgrading
Before you start upgrading the Enterprise Console and your platform, make sure that you are using the correct upgrade order.
The user upgrading the Enterprise Console should be the same user who installed the Enterprise Console.
Make sure your Enterprise Console is running before you run the upgrade. This is to validate the Database Root User Password and Platform
Admin Database Password. You will need to input the passwords when upgrading through the GUI.
For first-time upgrades, you need to use the same response file you used for your silent install. See Silent Installation.
Whether it is a first-time or a subsequent upgrade, you need to provide the relevant passwords in the response file.
GUI Installation
If there is a newer version of Enterprise Console available, you can begin the upgrade process by downloading and installing the latest version from the
AppDynamics download site on top of the existing application.
Before starting, get the Enterprise Console installer version appropriate for your target system. You can get the installer from the AppDynamics download
site. When ready, follow these steps to install the Enterprise Console:
Linux
./platform-setup-64bit-linux.sh
Windows
platform-setup-64bit-windows.exe
It is recommended that you right-click the .exe file and select Run as Administrator.
3. After the GUI launches, use it to complete the installation. In Linux, you may also follow the steps in the installation wizard to complete the
console installation.
If you install the Enterprise Console on AWS, use the public DNS for the Enterprise Console host name when prompted.
Silent Installation
To use silent mode, pass the response file that the installer generated at first installation to the installer. This response file is at the following location
<Enterprise_Console_home_directory>/.install4j/response.varfile
If you have made any changes to the settings as originally configured by the installer—such as to the connection port numbers, tenancy mode, or data
directory—make the same change in the response file before starting the upgrade.
platformAdmin.databasePassword= ENTER_PASSWORD
platformAdmin.databaseRootPassword= ENTER_PASSWORD
platformAdmin.adminPassword= ENTER_PASSWORD
After Upgrading
After upgrading the Enterprise Console, you must first clear your browser cache before you can successfully log in to the Enterprise Console through the
browser.
This page describes how to remove the Enterprise Console software and associated files using the uninstaller utility located in the Enterprise Console
directory.
Before Starting
Uninstalling the Enterprise Console will not uninstall any of the components it has deployed or managed since the Enterprise Console installer is agnostic
of the Controller and other services. Therefore, you do not need to first uninstall any components before uninstalling the Enterprise Console. However, if
you accidentally uninstall the Enterprise Console without first uninstalling any components, then you will have to manually manage the remaining platforms
and components. Leftover environments can also be rediscovered using another Enterprise Console application.
On Linux, open a terminal window and switch to the user who installed the Enterprise Console or to a user with equivalent directory
permissions.
On Windows, open an elevated command prompt by right-clicking on the Command Prompt icon in the Windows Start Menu and
choosing Run as Administrator.
2. From the command line, navigate to the Enterprise Console home directory.
3. Execute the uninstaller script to uninstall the Enterprise Console, as follows:
On Linux:
./installer/uninstallPlatform
On Windows:
run installer/uninstallPlatform.exe
./installer/uninstallPlatform -q
With this option, you do not need to interact with the installer to complete the removal.
On the Enterprise Console host and any of the hosts that the Enterprise Console manages, <platform home dir>/jre
and <platform home dir>/orcha are not removed.
What is the difference between the Enterprise Console installer and the Controller installer?
The Enterprise Console installer only installs the Enterprise Console application. You need to use the Enterprise Console later on to install the Controller
as well as other AppDynamics platform components.
The Controller installer is used to install only the Controller application. As of version 4.4, AppDynamics no longer supports the Controller installer. You
should use the Enterprise Console instead to install, monitor, upgrade, and configure the Controller.
Which hosts does the Enterprise Console need SSH access to?
The Controller and Events Service hosts.
How many SSH connections does the Enterprise Console make per minute?
For each single Controller node (HA Controller deployments will have two Controllers), the Enterprise Console will open approximately 10 SSH
connections per minute. For each Events Services node (an Events Service cluster may have 3–5 nodes), the Enterprise Console will open 1 SSH
connection per minute.
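For firewall or rate-limit planning, the per-minute totals can be computed directly from the figures above; the deployment shape here (an HA Controller pair plus a three-node Events Service cluster) is just an example:

```shell
# Approximate SSH connections per minute opened by the Enterprise Console:
controller_nodes=2        # HA deployment: two Controllers
es_nodes=3                # a small Events Service cluster
per_controller=10         # ~10 SSH connections/min per Controller node
per_es_node=1             # 1 SSH connection/min per Events Service node

total=$((controller_nodes * per_controller + es_nodes * per_es_node))
echo "$total"             # total connections per minute for this example
```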
What protocols does the Enterprise Console use to connect to remote hosts?
The Enterprise Console uses the Java Secure Channel (JSch) API with the provided key file to access remote hosts. If you have an SSH
jump server or jump host configuration, you will need to make additional provisions for the connection to work. Consult your AppDynamics
representative in such cases.
Can I use Express installation to discover and upgrade existing controllers and events services?
Express installation is a convenient way to create a complete platform on a single host in one step. It does not support discovering and upgrading an
existing controller managed outside the Enterprise Console.
If you need to scale up the Events Service on Windows, see Install the Events Service on Windows for instructions.
Related pages:
This page introduces you to the tasks involved with deploying AppDynamics to its operating environment, including host preparation and Controller
installation.
The system resources of the machine that hosts the Controller in a live environment must be able to support the expected workload.
Deployment Overview
Installing AppDynamics to a test or evaluation setting typically involves verifying system requirements, preparing the host, and then performing the
Controller installation. These topics are described in Prepare the Controller Host and Install the Controller Using the Enterprise Console.
Deploying the Controller to its production operating environment normally introduces additional requirements and considerations. Security, availability,
scalability, and performance all play an important role in production deployment planning. The following section lists the tasks related to deploying the
Controller.
Deployment Tasks
Depending on your specific requirements and environment, deployment tasks may include:
Ensure that target systems meet the Controller System Requirements for the Controller's expected workload.
Implement Controller High Availability to ensure service continuity in the event of a failure of the Controller server.
Configure the network environment. If deploying the Controller with a reverse proxy, configure passthrough of Controller traffic. Also, note other Ne
twork Requirements for the deployment environment.
Implement security requirements for your environment. If clients will connect to the Controller by HTTPS, install your custom SSL server certificate
on the Controller.
Generate a password management strategy for the built-in system accounts in the Controller and platform.
Make sure the mail server is properly configured for the Controller in the target environment and define your alerting strategy.
Devise your backup strategy. A typical backup strategy consists of frequent partial backups with intermittent full backups.
Plan your configuration maintenance and enhancement strategy. Changes to the configuration should be staged in a non-critical environment,
and rolled into the live environment only after thorough testing. The AppDynamics UI and REST API offer the ability to export and import
configuration settings from various contexts.
Deploying App Agents is likely to be an ongoing task, especially in dynamic environments where monitored systems are regularly taken down and
new ones brought up. There are two basic strategies for deploying large numbers of App Agents across a managed environment:
1. Deploy the agents independently of the application inside the application server. This method ensures that re-deployments of the
application do not overwrite the agent deployment.
2. Integrate deployment of AppDynamics agents into the deployment of applications. This more sophisticated approach requires modifying
the existing application deployment automation scripts.
For details, see:
Automate Agent Deployment
Unattended Installation for .NET
Network Requirements
Deploying the Controller often calls for configuration changes to existing network components, such as firewalls or load balancers in the network. If the
Controller will reside behind a load balancer or reverse proxy, you need to set up traffic forwarding for the Controller. You may also need to open ports
used by AppDynamics on firewalls or any other device through which traffic must traverse.
The following are general considerations for the environment in which you deploy AppDynamics. See AppDynamics Quick Start for other network
configuration requirements.
They do so by retrieving the time from the Controller every five minutes. Each App Agent then compares the Controller's time to its own local machine's
clock time. If the times differ, whether ahead or behind, the agent applies a time skew based on the difference to the timestamps of the metrics it reports
to the Controller.
If, despite the agent's attempt to report metrics based on the Controller time, the Controller receives metrics that are time-stamped ahead of its own time,
the Controller rejects the metrics. To avoid this possibility, AppDynamics recommends maintaining clock-time consistency throughout your monitored
environment.
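As an illustration of the skew adjustment described above (all values are made-up epoch seconds):

```shell
# The agent's clock is 120 seconds behind the Controller:
controller_time=1700000000
local_time=1699999880

# Skew the agent applies to the timestamps it reports:
skew=$((controller_time - local_time))

# A metric observed at this local time is reported with the skew applied,
# so its timestamp does not run ahead of the Controller's clock:
metric_local_ts=1699999900
metric_reported_ts=$((metric_local_ts + skew))
echo "$skew $metric_reported_ts"
```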
Usernames and passwords should not include the & or ! characters. If a user account needs to access the Controller REST API, additional limitations on
the use of special characters in usernames apply. See Create and Manage Tenant Users.
You choose the tenancy mode at installation time. You can switch the tenancy mode from single-tenant to multi-tenant mode later. It is not possible to
switch from multi-tenant to single-tenant mode.
A single-tenant Controller is suitable for most installations. Only very large installations, or installations with very distinct sets of users, may
require multi-tenancy.
In multi-tenant mode:
You can create multiple accounts (tenants) in the Controller.
Each account will have its own set of users and applications.
The Controller login page includes an additional field where users need to choose an account to log in to.
Essentially, multi-tenant mode allows you to partition users and access application data in a logical, secure way.
In single-tenant mode:
There is only one account (tenant) in the Controller system.
All users and applications are part of this single built-in account, so all users have access to all monitored Applications in this mode.
The account is not exposed to users in the Controller UI. The account field in the login page is omitted for single-tenant mode.
AppDynamics recommends a single-tenant mode for most installations.
1. If you are already logged in to the Controller in your browser as a non-administrative user, you must first log out.
2. Log back into the Controller using the following credentials:
Account: system
User: root
Password: <root password>
3. Select the AppDynamics Controller application.
4. Select Tiers and Nodes > Node1 > Memory to review the memory trend for the past 24 hours and locate the time when the performance issues
occurred.
Related pages:
This page describes hardware and software requirements for the Controller hosted on private or public cloud to help you prepare for your AppDynamics
deployment.
Note
The Controller requirements do not include the Enterprise Console and Events Service. You need to provision memory for each of those components separately.
Before installation, it's usually easiest to estimate your deployment size based on the number of nodes. For Java, for example, a node corresponds to a
JVM. However, the best indicator of the actual workload on your Controller is provided by the metric ingestion rate.
After initial installation, you should verify your Controller sizing using the metric upload rate. You then need to continue to monitor the Controller for
changing workload brought about by changes in the monitored application, its usage patterns, or in the AppDynamics configuration.
The Controller should run on a dedicated machine; for production, a dedicated machine is required. The requirements here assume that
no other major processes are running on the machine where the Controller is installed, including other Controllers.
The Controller is not supported on machines that use Power Architecture processors, including PowerPC processors. The Controller is supported
on amd64 / x86-64 architectures.
Ensure that the Controller host has approximately 200 MB of free space available in the system temporary directory.
Disk I/O is a key element to Controller performance, particularly low latency. See Disk I/O Requirements for more information.
Controller Sizing
The following table shows Controller installation profiles by metric ingestion rate and node count. As previously noted, the actual metrics generated by a
node can vary greatly depending on the nature of the application on the node and the AppDynamics configuration. Be sure to validate your sizing against
the metric ingestion rate before deploying to production.
Medium: up to 1,000,000 metrics/minute and approximately 1,500 agents, on Linux or Windows. Bare-metal hardware: 8 cores, 128 GB RAM, 5 TB SAS SSDs, hardware-based RAID 5 configuration.
Extra Large: for deployments that exceed the Large profile configuration defined here, contact AppDynamics Professional Services for a thorough viability evaluation of your Controller.
AWS profiles with Aurora (block storage is for Controller application files only*):
Medium: up to 1,000,000 metrics/minute and approximately 1,500 agents, on Linux. Compute instance: EC2 r4.2xlarge; AWS Aurora size: db.r4.4xlarge; block storage: 10 GB GP2 EBS volume. We recommend using a different volume than the instance's root volume.
Large: up to 5,000,000 metrics/minute and approximately 10,000 agents, on Linux. Compute instance: EC2 r4.8xlarge; AWS Aurora size: db.r4.16xlarge; block storage: 10 GB GP2 EBS volume. We recommend using a different volume than the instance's root volume.
* The specified disk space must be available for use by the Controller. Specifications do not include overhead from the operating system, file system, and
so on.
For AWS, provision an ENI for each Controller host and link the license to the MAC address of the ENI. For more information about ENI, see the AWS
documentation at the following link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html.
Large installations are not supported on virtual machines or systems that use network-attached storage.
The RAM recommendations leave room for operating system processes. However, the recommendations assume that no other memory intensive
applications are running on the same machine. While the Enterprise Console can run on the same host as the Controller in small or demo profile
Controllers, it is not recommended for medium and larger profiles or for high availability deployments. Refer to the Enterprise Console
Requirements if the Enterprise Console is on the same host as the Controller.
Disk sizing shown in the sizing table represents the approximate space consumption for metrics, about 7 MB for each metric per minute.
The motherboard should not have more than 2 sockets.
See Calculating Node Count in .NET Environments for information related to sizing a .NET environment.
The agent counts do not reflect additional requirements for EUM or Database Visibility. See the following sections for more information.
Disk I/O must perform such that the maximum write latency for the Controller's primary storage does not exceed 3 milliseconds while the Controller is under sustained load. AppDynamics cannot provide support for Controller problems resulting from excessive disk latency.
Self-monitoring must be set up for the Controller. Self-monitoring consists of a SIM agent that measures the latency of data partitions on the
Controller host, and the configuration needs to include dashboard and health rule alerts that trigger when the maximum latency exceeds 3 ms.
For details on Controller self-monitoring, contact your AppDynamics account representative.
The MySQL intent log is very sensitive to latency, and MySQL performs writes using varying block sizes.
For best performance, the stripe size of the RAID configuration should match the write size. The two write sizes are 16 KB (for the database) and 128 KB (for the logs). Use the smallest stripe size supported, but no smaller than 16 KB. If using a hardware-based RAID controller, be sure that it supports these stripe sizes. The stripe size is the number of data disks multiplied by the strip/segment/chunk size (the portion of data stored on a single disk).
If you choose to deploy one of these latency-challenged storage technologies on a system that is expected to process 1M metrics/min or greater, a
mirrored NVMe configured as a write-back cache for all storage accesses is recommended. Configuring such a device will hide some of the longer
latencies that have been seen in these environments.
In all cases, be sure to thoroughly test the deployment with real-world traffic load before putting an AppDynamics Controller into a live environment.
Web RUM can increase the number of individual metric data points per minute by up to 22,000.
Mobile RUM can increase the number of individual metric data points per minute by 15,000 to 25,000 per instrumented application if your applications are heavily accessed. The actual number depends on how many network requests your applications receive.
Monitoring EUM is memory intensive and may require more space allocated to the metrics cache.
The number of separate EUM metric names saved in the Controller database can be larger than the number of individual data points saved. For example, a metric name for iOS 5 might still be in the database even after all your users have migrated away from iOS 5. The metric name would no longer have an impact on resource utilization, but it would count against the Controller's default limit for metric names per application. The default limit is 200,000 names for Browser RUM and 100,000 for Mobile RUM.
For on-premises installations, the machine running the Controller and Events Service will require the following additional considerations, for a data retention period of 10 days:
For redundancy and optimum performance, the Events Service should run on a separate machine. For details on sizing considerations, see Events
Service Requirements.
The diagram shows three application pools — AppPool-1, AppPool-2, and AppPool-3 — with the following characteristics:
AppPool-1 and AppPool-3 can have a maximum of two worker processes (known as a web garden), containing two applications (AppA, AppB)
and one application (AppF), respectively.
AppPool-2 can have one worker process. It has three applications.
To determine the number of nodes, for each AppPool, multiply the number of applications by the maximum number of worker processes. Add those
together, as well as a node for the Windows service or standalone application processes.
The example would result in nine AppPool nodes. Adding one for a Windows service would result in a total of ten nodes, calculated as follows:
To find the number of CLRs that will be launched for a particular .NET Application/App Pool:
1. Open the IIS manager and see the number of applications assigned to that AppPool.
2. Check if any AppPools are configured to run as a Web Garden. This would be a multiplier for the number of .NET nodes coming from this
AppPool as described above.
Specifically, monitoring asynchronous calls increases the number of metrics per minute by up to 23,000.
Related pages:
This page describes how to install the AppDynamics Controller using the CLI, along with settings you should verify after installation. Alternatively, you can use the Enterprise Console GUI to install the Controller. For information about how to use the Enterprise Console to install the Controller, see Enterprise Console.
The Controller can be installed on the same host as the one on which the Enterprise Console is running or a remote host. Installing on the same host is not
recommended, however, particularly for medium and large scale profiles or for high availability deployments.
You may also use the loopback address '127.0.0.1' or the machine's actual hostname.
Complete the following steps carefully if you choose to install the Controller on this shared host rather than on a remote host. Note that all services on
Windows machines must be installed on the Enterprise Console host since the Enterprise Console does not support remote operations on Windows.
In the <Installation directory>/platform-admin directory, run the following commands to install the Controller:
1. Create a platform:
<file path to the key file> is the private key file. The installation process deploys the key to the Controller host.
For the localhost:
The localhost does not require credentials. You can, therefore, skip this step, especially for Windows deployments. For more information, see Manage Hosts.
3. Add the host.
For a remote host on Linux machines only:
On the localhost:
Note that these are the required parameters for installing a Controller with a demo profile size. For information about optional parameters, run the following command:
Installation Settings
Listening ports are configured for the Controller during installation. In GUI and CLI mode, the installation checks to make sure that each port it suggests is
available on the system before suggesting it. You only need to edit a default port number if you know it will cause a future conflict or if you have some other
specific reason for choosing another port.
Due to browser incompatibilities, AppDynamics recommends using only ASCII characters for usernames, passwords, and account names. The characters "
%" and "|" are allowed in the Controller root password.
http://<application_server_host_name>:<http-listener-port>/controller/rest/serverstatus
To apply a license file manually, copy the license file to the Controller home directory. After moving the license file, allow up to 5 minutes for the license
change to take effect.
While installation is in progress, you can find the log file at platform-admin/logs/platform-admin-server.log.
During installation and setup, the Enterprise Console tries to start the Controller. This procedure can take some time. If Controller installation fails, you can
troubleshoot and identify the fix and retry from a checkpoint.
A High Availability (HA) Controller deployment helps you minimize the disruption caused by a server or network failure, administrative downtime, or other
interruptions. An HA deployment is made up of two Controllers, one in the role of the primary and the other as the secondary.
The Enterprise Console automates the configuration and administration tasks associated with a highly available deployment on Linux systems. Controller
HA pairs are not available on Windows Enterprise Console machines.
Essentially, to set up high availability for Controllers, you are configuring master-master replication between the MySQL instances on the primary and
secondary Controllers.
An important operational point to note is that while the databases for both Controllers should be running, both Controller application servers should never
be active (i.e., running and accessible by the network) at the same time. Similarly, the traffic distribution policy you configure at the load balancer for the
Controller pair should only send traffic to one of the Controllers at a time (i.e., do not use round-robin or similar routing distribution policy at the load
balancer).
In HA mode, each Controller has its own MySQL database with a full set of the data generated by the Controller. The primary Controller has the master
MySQL database, which replicates data to the secondary Controller's replica MySQL database. HA mode uses a MySQL Master-Master replication type of
configuration. The individual machines in the Controller HA pair need to have an equivalent amount of disk space.
The following figure shows the deployment of an HA pair at a high level. In this scenario, the agents connect to the primary Controller through a proxy load
balancer. The Controllers in an HA pair must be equivalent versions, and be in the same data center.
In the diagram, the MySQL instances are connected via a dedicated link for purposes of data replication. This is an optional but recommended measure for
high volume environments. It should be a high capacity link and ideally a direct connection, without an intervening reverse proxy or firewall. See Load
Balancer Requirements and Considerations on Set Up a High Availability Deployment for more information on the deployment environment.
The Controller app server process on the HA secondary can remain off until needed. Having two active primary Controllers is likely to lead to data
inconsistency between the HA pair.
When a failover occurs, the secondary app server must be started, or restarted if it is already running, which clears the cache.
To benefit from increased replication setup speeds, your server needs access to network resources capable of some hundreds of MB per second. By specifying replication setup parallelism, you can radically reduce setup times.
For example, if a single rsync is using only one-fifth of the available network capacity, you can achieve maximum throughput for setup by appending -P r5 to the end of the replicate.sh command. If this level of network traffic interferes with ongoing Controller operation, you should monitor and adjust this setting.
If you are using HA Toolkit version 3.54 or later, append -P r5 to the end of the replicate.sh command.
If you are using the HA module with Enterprise Console (version 4.5.17 and later), add --args numberThreadForRsync=5 to the CLI command.
From the Enterprise Console UI, select Number of parallel rsync threads for incremental or finalize (depending on what stage you
are performing)
AppDynamics recommends that traffic routing be handled by a reverse proxy between the agents and Controllers, as shown in the figure above. This
removes the necessity of changing agent configurations in the event of a failover or the delay imposed by using DNS mechanisms to switch the traffic at
the agent.
If using a proxy, set the value of the Controller host connection in the agent configuration to the virtual IP or virtual hostname for the Controller at the proxy,
as in the following example of the setting for the Java Agent in the controller-info.xml file:
<controller-host>controller.company.com</controller-host>
For the .NET Agent, set the Controller high availability attribute to true in config.xml. See .NET Agent Configuration Properties.
If you set up automation for the routing rules at the proxy, the proxy can monitor the Controller at the following address:
http://<controller>:<port>/controller/rest/serverstatus
An active node returns an HTTP 200 response to GET requests to this URL, with <available>true</available> in the response body. A passive
node returns 503, Service Unavailable, with a body of <available>false</available>.
Controller installation pre-requisites for both servers are met. See Platform Requirements
Two dedicated machines running Linux. The Linux operating systems can be Fedora-based Linux distributions (such as Red Hat or CentOS) or
Debian-based Linux distributions (such as Ubuntu).
In a Controller HA pair, a load balancer should route traffic from the Controller clients (Controller UI users and App Agents) to the active
Controller. Before starting, make sure that a load balancer is available in your environment and that the virtual IP address for the Controller pair is
known as presented by the load balancer.
Open port number 3388 between the machines in an HA pair.
The login shell must be bash (/bin/bash).
A network link connecting the HA hosts that can support a high volume of data. The primary and replica must be in the same data center, and there must be a dedicated network link between the hosts.
Passwordless ssh has been set up between two Controller hosts. See Set Up the SSH Key.
SSH keys on each host allow ssh and rsync operations by the AppDynamics user.
The hosts file (/etc/hosts) on both Controller machines should contain entries to support reverse lookups for the other node in the HA pair.
Because Controller licenses are bound to the network MAC address of the host machine, the HA replica Controller requires an additional HA
license. You should request a secondary license for HA purposes in advance.
When adding high availability hosts as part of the add host operation, you provide the remote user that the Controller is installed as. The platform path you specify (while creating the platform) must be writable on both HA hosts by the remote user specified during the add host operation.
The following packages are installed on both Controller hosts, and the relevant installation commands are provided:
Command  Yum-based installer (RHEL, CentOS, Amazon Linux)  Apt-based installer (Ubuntu)
This page describes how to set up and deploy Controllers as a high availability pair. For installation and upgrade details, see Custom Install and Upgrade
an HA Pair.
The Enterprise Console HA deployment works on Linux systems only. Controller HA pairs are not available on Windows machines using
Enterprise Console.
The servers of Controllers in an HA pair must be identical in terms of OS, CPU, RAM, and Disk. See Controller System Requirements.
You can:
Deploying Controllers as an HA pair ensures that service downtime in the event of a Controller machine failure is minimized. It also facilitates other
administrative tasks, such as backing up data. For more background information, including the benefits of HA, see Controller High Availability (HA).
Before Starting
For general guidelines and requirements on how to deploy HA in your environment, see Prerequisites for High Availability. Your environment must meet
the prerequisites.
/etc/sudoers.d/appdynamics contains entries to allow the AppDynamics user to access the /sbin/service utility using sudo without a
password. This mechanism is not available if the AppDynamics user is authenticated by LDAP.
/sbin/appdservice is a setuid root program distributed in source form in <controller_home>/controller-ha/init/appdservice.c. It is written explicitly to support auditing by security audit systems. The install-init.sh script compiles and installs the program. It is executable only by the AppDynamics user and the root user. The script requires a C compiler to be available on the system. You can install a C compiler using the package manager for your operating system. For example, on Yum-based Linux distributions, you can use the following command to install the GNU Compiler, which includes a C compiler:
Two IP addresses for each Controller machine, one for the HTTP primary port interface ( ) and one for the dedicated link interface between
the Controllers ( ) on each machine. The dedicated link is recommended but not mandatory.
If the Controllers will reside within a protected, internal network behind the load balancer, you also need an additional internal virtual IP for the
Controller within the internal network ( ).
When configuring replication, you specify the external address at which Controller clients, such as app agents and UI users, will address the Controller at the load balancer. The Controllers themselves need to be able to reach this address as well. If the Controllers will reside within a protected network relative to the load balancer, preventing them from reaching this address, there needs to be an internal VIP on the protected side that proxies the active Controller from within the network. This is specified using the -i parameter.
The load balancer can check the availability of the Controller at the following address:
http://<controller_host>:<port>/controller/rest/serverstatus
If the Controller is active, it responds to a GET request at this URL with an HTTP 200 response. The body of the response indicates the status of the
Controller in the following manner:
Ensure that the load balancer policy you configure for the Controller pair can send traffic to only a single Controller in the pair at a time (i.e., do not use
round-robin or similar routing distribution policy at the load balancer). For more information about setting up a load balancer for the Controller, see Use a
Reverse Proxy.
To reduce errors, use the correct format of /etc/hosts files. If you have both dotted hostnames and short versions, you need to list
the dotted hostnames with the most dots first and the other versions subsequently. This should be done consistently for both HA server
entries in each of the two /etc/hosts files. Note that in the examples provided, the aliases are listed last.
The following steps describe how to perform this configuration. The instructions assume an AppDynamics user named appduser, and the Controller
hostnames are node1, the active primary, and node2, the secondary. Adjust the instructions for your particular environment. Also note that you may not
need to perform every step (for example, you may already have the .ssh directory and don't need to create a new one).
Although not shown here, some of the steps may prompt you for a password.
su - appduser
2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
mkdir -p .ssh
chmod 700 .ssh
On the secondary host (node2), repeat the same steps, then generate an SSH key pair and copy the public key to node1:
su - appduser
mkdir -p .ssh
chmod 700 .ssh
ssh-keygen -t rsa -N "" -f .ssh/id_rsa -m pem
scp .ssh/id_rsa.pub node1:/tmp
1. Check that you have fulfilled the Enterprise Console prerequisites before starting.
2. Open a browser and navigate to the GUI:
http(s)://<hostname>:<port>
a. On the Credential page, add the SSH credentials for the host you want to install the primary Controller on. You can also run the
following command on the Enterprise Console host:
Remember to provide the private key file for the Enterprise Console machine when adding a credential.
b. On the Hosts page, add the host using the credentials from above. You can also run the following command on the Enterprise Console
host:
The Installation Path is an absolute path under which all of the platform components are installed. The same path is used for
all hosts added to the platform. Use a path which does not have any existing AppDynamics components installed under it.
The path you choose must be writable, i.e., the user who installed the Enterprise Console should have write permissions to that folder. Also, the same path should be writable on all of the hosts that the Enterprise Console manages.
a. For each of the two hosts, enter their host machine information: Host Name, Username, and Private Key.
This is the location onto which the Controllers will be installed. For more information about how to add credentials and hosts, see Adminis
ter the Enterprise Console.
7. Install Controller:
a. Select Install.
b. Select a Profile size for your Controller. See Controller System Requirements for more information on the sizing requirements.
c. Enter the Controller Primary and Secondary hosts.
d. Enter the required Username and Passwords. The default Controller Admin Username is admin.
If you do not install a Controller at this time, you can always do so later by navigating to the Controller page in the GUI and
clicking Install Controller.
8. Click Install.
-s copies the file to the secondary host. Enter the MySQL database root user password in the command.
3. Change directories to <controller_home>/controller-ha/init.
4. Run install-init.sh as root user with one of the following options to select how to elevate the user privilege:
You must run this script on both Controller HA pair servers. If you need to uninstall the service later, run the uninstall-init.sh script.
The status and progress of the deployment's various components are written to the logs.
Additionally, you can use incremental replication to add a secondary Controller. See Initiate Controller Database Incremental Replication for more
information.
If you are starting from a fresh installation, you will need to first create a platform, then add two credentials and hosts for your HA pair.
a. On the Credential page, add the SSH credentials for the host you want to install the secondary Controller on. You can also run the
following command on the Enterprise Console host:
Remember to provide the private key file for the Enterprise Console machine when adding a credential.
b. On the Hosts page, add the host. You can also run the following command on the Enterprise Console host:
a. Select the Controller Secondary Host that you added for the secondary Controller.
b. Optional: Enter the External URL. This is the external load balancer URL, which should reflect this format: http(s)://<external.vip>:<port>
c. Enter the DB Root Password, and re-enter it for confirmation.
Be sure to provide the same passwords during the secondary server installation as those that you provided for the primary server.
4. Select Submit.
Your HA pair will automatically set up, each with their own MySQL node.
This page describes how to manage and troubleshoot Controllers as a high availability pair.
This platform should not be used for installing any other services.
b. Install a Controller.
c. Make sure to unselect the Install Events Service option before clicking Install.
3. Complete the monitoring setup by installing and configuring the App Agents and Machine Agents on your HA pair:
Set Up App Agents for Monitoring
Install and Set Up Machine Agents for Monitoring
1. SSH into the primary Controller box and update the primary Controller App Agent's controller-info.xml by running the following commands:
cd <controller-install-dir>/appserver/glassfish/domains/domain1/appagent
cp conf/controller-info.xml ver<version#>/conf/
-Dappdynamics.agent.applicationName=<app_name>, -Dappdynamics.agent.tierName=<tier_name>,
-Dappdynamics.agent.nodeName=<node_name>, -Dappdynamics.agent.accountName=<account_name>,
-Dappdynamics.agent.accountAccessKey=<access_key>
You can get your access key from the Controller UI: navigate to Settings > License > Account, then click to show your access key. Note: when you log in to the Controller, use the account specified in appdynamics.agent.accountName.
5. Scroll down the page and click Save. The job will apply these properties and restart both the primary and secondary Controllers.
6. In the Enterprise Console UI, select your Controller Monitor Platform, and navigate to the Controller page.
7. Click on External URL on the widget to open the monitoring Controller's UI.
8. Log in to the Controller. You should be able to see the monitoring application for both the primary and secondary Controllers.
1. Install the Machine Agent on the primary Controller box. Do not start the agent.
2. Repeat step 1 for the secondary Controller.
3.
1. Log in to the Enterprise Console and navigate to the Appserver Configurations page by clicking through Configurations, followed by Controller
Settings.
2. Deselect Enable Auto Failover and click Save.
3. SSH to the Controller machine where the Controller is installed.
4. Run the following commands on the Enterprise Console host:
bin/platform-admin.sh stop-controller-appserver
bin/platform-admin.sh start-controller-appserver
To start:
To stop:
Automatic Failover
The Enterprise Console monitors the health of the primary Appserver and database. If the Appserver or database is unresponsive, the Enterprise Console
will by default wait for five minutes before initiating a failover. This interval can be configured by updating the default value in the Domain Protocol text field
on the Appserver Configurations page under Controller settings.
You can also disable or enable automatic failover through the CLI.
Version 4.5.14 and above of the Enterprise Console comes with the High Availability (HA) module, which utilizes the Controller Watchdog for auto-failover. To enable or disable auto-failover, the watchdog script must be running or stopped, respectively.
To disable or enable the Controller Watchdog from the CLI, use the following commands:
This makes the Appserver on the secondary the primary and the database on the secondary the replication master. It also changes the old primary to the secondary.
The process for performing a failback to the old primary is the same as failing over to the secondary. Simply run the following command on the Enterprise
Console host:
Note that if the old primary has been down for more than seven days, you need to revive the database, as described in the following section.
If replication fails, go to the secondary host and stop all rsync and ha-replicate.sh processes. Then try running the incremental-
replication job again.
3. Finalize the job by running the following command on the Enterprise Console host:
This stops the incremental replication loop. The command will restart the primary Controller, resulting in downtime.
4. Make sure replication is working by checking that there is no significant gap between the primary and secondary Controllers. You can run the
following command on the Enterprise Console host to check the replication status:
It may take a few minutes for the secondary status to catch up.
If replication fails, go to the secondary host and stop all rsync and ha-replicate.sh processes. Then try running the incremental-
replication job again.
3. Run the add secondary job. The Enterprise Console will perform a final rsync and add the secondary.
Until you trigger the add-secondary command, the secondary Controller is not added to the Enterprise Console platform. Therefore, the
Enterprise Console will not be able to perform any other operations on the secondary Controller.
If you need to stop replication, you can run the following command:
2.
3. Enter a number in the Number of parallel rsync threads field and click Submit. The default value is 1.
From the CLI, based on which replication you are performing, run either of the following commands from the Enterprise Console host and set the numberThreadForRsync argument.
3.
4. Click Submit.
From the CLI, run the following command from the Enterprise Console host to enable MySQL5.7 parallel replication. The default value is true.
1. On the Controller page, click Remove Controller, or run the following command on the Enterprise Console host:
4. Uncheck Remove Controller Cluster. If it is already unchecked, remove the secondary server.
5. Click Submit.
6. Add a secondary controller from the Controller page, or run the following command on the Enterprise Console host:
The Enterprise Console will onboard the secondary Controller and re-enable replication.
After setting up HA, perform a backup by stopping the Controller from the Enterprise Console and performing a file-level copy of the AppDynamics home directory (i.e., a cold backup). When finished, restart the Controller from the Enterprise Console. The secondary will then catch up its data to the primary.
When restoring the database from a backup in an HA or standalone environment, check that ha.type and ha.mode on the primary and secondary servers are set properly to active and passive, respectively.
Over time, if you need to make modifications to the Controller configuration, always do those changes in the Enterprise Console on the Controller Settings
page under Configurations. These changes will be preserved during upgrades. Any changes made outside the Enterprise Console will not be preserved
after upgrade.
Troubleshooting HA
123.45.0.2:
controller_database: running
controller_appserver: not running
reports_service: running
operating_system: Linux
controller_version: 004-004-001-000
controller_performance_profile: small
controller_ha_type: secondary
controller_appserver_mode: passive
bin/controller.sh login-db
b. Run the following commands to set the database to the primary or secondary:
3. Restart the database for the change to take effect on the Appserver:
If the secondary Appserver is already in a shutdown state, then there is no need to restart the database.
4. Verify the replication is healthy:
Failover Prevention
If failover is prevented on your Controller HA configuration, it may be due to one of two scenarios:
The secondary database is down. Failover cannot occur when the secondary database is not running.
To fix this issue:
1. Restart the secondary database by running the following command on the secondary host:
bin/controller.sh start-db
If this does not enable failover, then it may be due to the second scenario.
Database replication is not healthy. Failover is not allowed when the database replication is not healthy.
There are various reasons why this may be the case. Please work closely with your AppDynamics account representative to correct the issue.
With the new Controller HA Module, the Enterprise Console no longer manages the Controller failover. Instead, the HA
Module installs the Controller watchdog on the Controller hosts, and the Controller hosts are now responsible for performing a failover. The new HA Module
is packaged in the Enterprise Console and allows you to migrate seamlessly from older HA implementations, such as the HA Toolkit (HATK). The new HA
Module is included with the latest version of the Enterprise Console, and is installed when you install or upgrade your Controller.
However, it is not activated until you migrate an HA pair.
If your Controller version is earlier than version 4.5.13, then you must use the HA Toolkit (HATK) instead of the HA Module to install and
configure High Availability.
Scenario One: Deployment currently using both Enterprise Console and HA Toolkit
Scenario Two: Deployment where Enterprise Console alone manages the HA Pair
Scenario One: Upgrade Deployment that uses Enterprise Console and HA Toolkit
1. Download and install the latest version of Enterprise Console from the Downloads page.
2. Open Enterprise Console and select Controller. The Controller list displays only the primary Controller of the HA Pair. If you have an existing
pair of HA Controllers that are not managed by Enterprise Console, you need to forget (remove) the Controller from within Enterprise Console.
3. Select the Controller and select Remove.
In the Remove Controller dialog, deselect Remove Binaries. At this point, you are just removing the Controller from Enterprise
Console but not the AppDynamics software that is running on the HA Controllers.
4. Before you can activate the new HA Module, Enterprise Console must discover both Controllers. Once the remove Controller job
completes, select Controller > Discover & Upgrade Controller.
6. Complete the fields of the Discover Controller dialog and select Continue. This adds the HA Pair to Enterprise Console where you can then
manage and upgrade.
Access the Jobs page to see several jobs that have completed successfully. The Controller Discover & Upgrade Job will take a while to complete.
Select View Details to track the progress of the tasks involved to discover and upgrade the Controller.
When the discovery and upgrade process for the HA Pair is complete, the Controller page should be similar to the following:
Now, Enterprise Console knows about the HA Pair and has copied the new HA Module to the primary and secondary Controllers in the HA Pair.
7. To activate the HA Pair, open a command shell on the Enterprise Console host and enter the following:
./uninstall-init.sh
d. Run install-init.sh and install the same services with one of the following options to elevate the user privilege:
Watchdog Widget
After activating the HA Module, a new widget displays in the Enterprise Console UI indicating the Controller Watchdog status. It has three states:
The Controller watchdog process is constantly checking the health of your primary Controller. No user action is required.
In the event that your primary Controller goes down, the failover is not automatic. AppDynamics recommends that you start the Controller watchdog using the Start Controller Watchdog option in Enterprise Console.
Failover is currently in progress; your primary Controller and secondary Controller switch roles.
This section contains pages on administering the Controller. Depending on how you installed and deployed the platform, you may have more than one
method to administer the platform.
If you use the Enterprise Console, many tasks can be done through the GUI or command line. For information about how to use the Enterprise Console,
see the pages in the Enterprise Console section.
Additionally, AppDynamics provides command-line tools for common operations, such as starting and stopping the platform components and their
services. Some tasks involve using the administration interface for the underlying Glassfish application server, either the web interface or the command-line tool.
The pages in this section describe how to use the tools along with other administration tasks.
You can start or stop the Controller, and also check the Controller health from the Enterprise Console GUI or CLI.
To avoid the possibility of data corruption errors, be sure to stop the application server and database gracefully by using the stop scripts before
shutting off or rebooting the machine on which the Controller is running.
You can start and stop the Controller and Controller services on the Controller page in the GUI or from the command line. When you start and stop the
Controller, services and processes related to the Controller also start and stop, including the Reporting Service. If you use the GUI to start and stop the
Controller, specify that you want to stop MySQL if you want to also stop the Controller Database.
If your Enterprise Console manages multiple platforms, to distinguish the Controller platform, you must use the command line for standalone or
secondary HA Controllers and specify the --platform-name <name_of_the_platform>.
For example:
To see all options, run platform-admin.sh list-jobs --service controller --platform-name <platform-name> from the
command line. For more information, see Enterprise Console Command Line or Administer the Enterprise Console.
The Enterprise Console has a max wait time of 45 minutes when starting or stopping the Controller. You can set a timeout which exits the command and
returns a failure by appending --args controllerProcessTimeoutInMin=<minutes> to the end of your start or stop command.
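Combining the flags described above, a stop invocation with an explicit timeout might look like the following. The job name stop-controller-appserver is an assumption here, so confirm the commands available in your Enterprise Console version (for example, via list-jobs) before using it:

```shell
# Hypothetical invocation combining the flags described on this page;
# verify the command name against your Enterprise Console CLI version.
bin/platform-admin.sh stop-controller-appserver \
  --platform-name my-platform \
  --args controllerProcessTimeoutInMin=60
```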
When using the Enterprise Console, starting or stopping the Controller will also start or stop the Reporting Service.
You can only start or stop the Secondary Controller.
You should disable auto-failover before restarting a primary Controller. Do not forget to re-enable auto-failover afterward.
If you enabled auto-failover in the Enterprise Console, and you stopped the app server to update certificates, the Enterprise Console
will trigger a failover if the update takes longer than five minutes.
If you are using a combination of the Enterprise Console with the High Availability Toolkit (HATK), then you can start or stop the
Controller using services.
On Linux
To start the database, run this script from the Controller home:
bin/controller.sh start-db
bin/controller.sh stop-db
bin/platform-admin.sh check-controller-health
The following output shows the status of the Controller and its uptime:
This page describes how to set up the Controller behind a reverse proxy.
The proxy provides a security layer for the Controller, but it also enables you to move a Controller to another machine or switch between high availability
pairs without having to reconfigure agents.
As shown, the reverse proxy listens for incoming requests on a given path, /controller in this case, on port 80. It forwards matching requests to the HTTP
listening port of the primary Controller at appdhost1:8090.
In terms of network impact, switching active Controllers from the primary to the secondary in this scenario only requires the administrator to
update the routing policy at the proxy so that traffic is directed to the secondary instead of the primary.
These instructions describe how to set up the Controller with a reverse proxy. They also provide sample configurations for a few specific types of proxies,
including NGINX and Apache Web Server.
This information is intended for illustration purposes only. The configuration requirements for your own deployment are likely to vary depending on your
existing environment, the applications being monitored, and the policies of your organization.
While AppDynamics supports Controllers that are deployed with a reverse proxy, AppDynamics Support cannot guarantee help with specific setup
questions and issues particular to your environment or the type of proxy you are using. For this type of information, please consult the documentation
provided with your proxy technology, or ask the AppDynamics community.
General Guidelines
The following describes general requirements, considerations, and features for deploying the AppDynamics Controller and App Agents with a reverse
proxy.
Set the deep link JVM option, -Dappdynamics.controller.ui.deeplink.url, to the value of the Controller URL. Use either the hostname
or IP address of the Controller host (if directly accessible to clients) or the VIP for the Controller as exposed at the proxy, in the following format:
Use the URI scheme (http or https), hostname, and port number appropriate for your Controller. The Controller uses the deep link value to
compose URLs it exposes in the UI.
If terminating SSL at the proxy, also set the following JVM options:
-Dappdynamics.controller.services.hostName=<external_DNS_hostname>
-Dappdynamics.controller.services.port=<external_port_usually_443>
You should use the Enterprise Console's Controller Settings page to edit the JVM options to retain your settings. See Update Platform
Configurations for more information.
If the proxy sits between monitored tiers in the application, make sure that the proxy passes through the custom header that AppDynamics adds
for traffic correlation, singularity header. Most proxies pass through custom headers by default.
For App Agents, the Controller Host and Controller Port connection settings should point to the VIP or hostname and port exposed for the
Controller at the reverse proxy. For details see Agent-to-Controller Connections.
If using SSL from the agent to the proxy, ensure that the security protocols used between the App Agent and proxy are compatible. See the
compatibility table for the SSL protocol used by each version of the agent.
If the proxy (or another network device) needs to check for the availability of the Controller, it can use the Controller REST resource at
http://<host>:<port>/controller/rest/serverstatus. If the Controller is active (and, in high availability mode, is the primary), it returns an XML
response similar to this one:
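This endpoint can also back a simple scripted availability probe. The sketch below uses curl and treats an HTTP 200 as "active"; the URL passed in is a placeholder for your deployment:

```shell
# Sketch: probe the Controller's serverstatus endpoint; succeeds only if
# it answers HTTP 200 (the Controller is active and, in HA, the primary).
controller_is_active() {
  local code
  code=$(curl -s -o /dev/null --max-time 5 -w '%{http_code}' \
    "$1/controller/rest/serverstatus")
  [ "$code" = "200" ]
}

# Example (placeholder host):
#   controller_is_active "http://appdhost1:8090" && echo "Controller is active"
```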
The following sections provide notes and sample configurations for a few specific types of proxies, including Nginx and Apache Web Server.
To use Nginx as a reverse proxy for the Controller, simply include the Controller as the upstream server in the Nginx configuration. If deploying two
Controllers in a high availability pair arrangement, include the addresses of both the primary and secondary Controllers in the upstream server definition.
The following steps walk you through the setup at a high level. They assume you have already installed the Controller and have an Nginx instance, and you
only need to modify the existing configuration to have Nginx route traffic to the Controller.
server {
listen 80;
server_name appdcontroller.example.com;
expires 0;
add_header Cache-Control private;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://appdcontroller;
}
}
In the sample, the Controller resides on 127.0.15.11 and has the fully qualified domain name appdcontroller.example.com.
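The location block above proxies to an upstream named appdcontroller, whose definition is not shown here. A minimal definition, assuming the Controller's HTTP listening port is 8090 as in this example, would be:

```nginx
# Minimal upstream definition referenced by proxy_pass above
# (address and port assumed from the surrounding example).
upstream appdcontroller {
    server 127.0.15.11:8090 fail_timeout=0;
}
```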
6. Restart the Nginx server to have the change take effect.
7. Restart the Controller.
After the Controller starts, it should be able to receive traffic through Nginx. As an initial test of the connection, try opening the Controller UI via the proxy;
that is, in a browser, go to http://<virtualip>:80/controller. For the App Agents, you will need to point their Controller host and port settings at the
proxy, as described in the general guidelines above.
apache2ctl -M
proxy_module (shared)
proxy_http_module (shared)
b. Restart Apache:
6. Add the proxy configuration to Apache. For example, a configuration that directs client requests from the standard web port 80 at the proxy host to
the Controller could look similar to this:
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyRequests Off
ProxyPreserveHost On
7. Apply your configuration changes by reloading Apache modules. For example, enter:
After the Controller starts, test the connection by opening a browser to the Controller UI as exposed by the proxy. To enable AppDynamics App Agents to
connect through the proxy, be sure to point the agents' Controller host and port settings at the proxy, and apply any other applicable guidelines from the
general guidelines above.
This section provides a sample configuration for Nginx, but the concepts translate to other types of reverse proxies as well.
Ensure that the App Agents can establish a secure connection with the proxy. See Agent and Controller Compatibility for SSL settings for various
versions of the agent. Ensure that the proxy includes a server certificate signed by an authority that is trusted by the agent. Otherwise, you will
need to import the proxy's server certificate into the agent's truststore.
If using .NET App Agents in your environment, verify that the reverse proxy server uses a server certificate signed by a certificate authority (CA).
The .NET App Agent does not permit SSL connections based on a self-signed server certificate.
Configure the proxy to accept traffic from clients on a secure port and forward it to the Controller.
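Because the .NET App Agent rejects self-signed certificates, it can be useful to check whether a certificate is self-signed before deploying it at the proxy. A self-signed certificate has an identical subject and issuer, which openssl can show. This is only a sketch, not a full chain validation:

```shell
# Sketch: a certificate whose subject equals its issuer is self-signed.
# This does not validate the chain; it only flags the obvious case that
# the .NET App Agent will reject.
is_self_signed() {
  local subj iss
  subj=$(openssl x509 -in "$1" -noout -subject)
  iss=$(openssl x509 -in "$1" -noout -issuer)
  [ "${subj#subject=}" = "${iss#issuer=}" ]
}

# Example (placeholder path):
#   is_self_signed /etc/nginx/server.crt && echo "self-signed certificate"
```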
A complete example configuration with Nginx performing SSL termination for the Controller would look something like this:
upstream appdcontroller {
server 127.0.15.11:8191 fail_timeout=0;
}
server {
listen 80;
server_name appdcontroller.example.com;
return 301 https://$host$request_uri;
}
server {
listen 443;
server_name appdcontroller.example.com;
ssl on;
ssl_certificate /etc/nginx/server.crt;
ssl_certificate_key /etc/nginx/server.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+EXP;
ssl_prefer_server_ciphers on;
expires 0;
add_header Cache-Control private;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_redirect http:// https://;
proxy_pass http://appdcontroller;
}
}
This example builds on the configuration shown in the simple passthrough example. In this one, any request received on the non-SSL port 80 is routed to
port 443. The server for port 443 contains the settings for SSL termination. The ssl_certificate_key and ssl_certificate directives
should identify the location of the server security certificate and key for the proxy.
The configuration also indicates the SSL protocols and ciphers accepted for connections. The security settings
need to be compatible with the AppDynamics App Agent security capabilities, as described on the Agent and
Controller Compatibility page.
Cookie Security
The Controller sets the secure flag on the X-CSRF-TOKEN cookie sent over HTTPS. The secure flag ensures that clients only transmit the cookies on
secure connections.
If you terminate HTTPS at a reverse proxy in front of the Controller, the Controller does not set cookie security by default because connections to the
Controller would occur over HTTP.
To ensure cookie security, configure the reverse proxy to add the Secure flag to the Set-Cookie header. How to add the flag varies
depending on the type of reverse proxy you use.
The following example shows how to set cookie security using HAProxy:
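A minimal sketch of this idea in HAProxy appends the Secure flag to outgoing Set-Cookie headers. The backend and server names are placeholders, and the replace-header directive (HAProxy 1.6 and later) should be verified against your HAProxy version's documentation:

```haproxy
backend appd_controller
    # Append the Secure flag to every Set-Cookie header in responses
    # (placeholder names; verify syntax for your HAProxy version)
    http-response replace-header Set-Cookie ^(.*)$ "\1; Secure"
    server controller1 127.0.15.11:8090 check
```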
When proxying to the Controller over HTTPS instead, point the proxy at the Controller's secure port, for example with proxy_pass https://appdcontroller; in Nginx.
To complete the configuration, make sure you have configured SSL on the Controller as described in Controller SSL and Certificates.
The SMTP email server must be configured to enable email and SMS notifications and digests to be sent by the Controller.
Permissions
For this activity, users need the predefined Account Owner role or another role with the Configure Email / SMS permission.
A SaaS Controller should be preconfigured with the appropriate settings, but verify the settings as follows:
No authentication is needed.
For an on-premises Controller, use the host and port settings for an SMTP server available in the Controller deployment environment.
3. Customize the sender address for notification emails in the From Address field. By default, emails are sent by the root Controller user.
4. If the SMTP host requires authentication, configure the credentials in the Authentication settings.
5. If you want to add any text to the beginning of the notification, enter it in the Notification Header Text field.
6. If you are using SMS, do one of the following:
Select Default and choose one of the available carriers from the pulldown menu.
Select Custom and enter the phone number receiving the message as <phone number>@<sms gateway>.
For example, a mobile phone in the United States serviced by AT&T might be:
4151234567@txt.att.net
A mobile phone in the United Kingdom might be:
7412345678@txtlocal.co.uk
See SMS gateway by country for information on most common SMS gateways.
7. Test the configuration by sending an email.
8. Save the settings.
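The custom SMS address from step 6 is simply the subscriber number joined to the carrier's gateway domain. A small sketch of composing it (the helper name is illustrative, not part of the product):

```shell
# Compose a carrier SMS-gateway address: <phone number>@<sms gateway>.
# Strips formatting characters from the phone number first.
sms_address() {
  digits=$(printf '%s' "$1" | tr -cd '0-9')
  printf '%s@%s\n' "$digits" "$2"
}

sms_address "415-123-4567" "txt.att.net"   # prints 4151234567@txt.att.net
```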
Troubleshoot Notifications
If you do not receive notifications for health rule violations, it could be because the default SMTP server timeout period is too short. To troubleshoot,
increase the value of the mail.smtp.socketiotimeout Controller Setting in the Administration Console. The default value is 30 seconds.
This page describes how to change the passwords for the root user and Glassfish admin for the Controller.
The root user is a superuser for the Controller. Unlike other types of users, you cannot remove the root user account or create other superuser accounts in
the Controller. The password for the root user is set during installation, but you can change the root password in the Administration Console.
While the root user has global administrative privileges, account administrators act as administrators only within individual accounts in a multi-tenant
Controller. It's typically the role of the root user to create accounts and an initial administrator for the account, and the role of each account administrator to
create additional users within the account. See Roles and Permissions and Manage Users and Groups.
Logging in to the Administration Console requires you to have the root user password. If you do not have the root user password, you need to
reset it.
1. From the command line, change to the Controller's bin directory. For example, on Linux:
cd <controller_home>/bin
2. Use the following script to log in to the Controller database:
For Windows: controller.bat login-db
For Linux: sh controller.sh login-db
You will see a MySQL prompt.
3. After running the script, you will be prompted to enter a password. Enter the root password for the Controller database.
4. From the MySQL prompt, enter the following SQL command to get root user details:
The hash for the password will be upgraded to PBKDF2 when you log in.
6. Restart the Appserver.
To update the GlassFish admin user password in the Enterprise Console, see the following steps:
If you have updated the password in the Configurations tab, then you don't need to input the password again for the upgrade job.
cd <controller_home_dir>
bin/controller.sh stop
If you are using MS Windows, you must use the Windows services to stop the Controller.
The insecure option starts the database without password requirements. Use this option only to reset the password for the database. The option
is similar to starting MySQL with the --skip-grant-tables option.
4. To log in to the database, enter:
use mysql;
b. FLUSH PRIVILEGES;
select version();
MySQL 5.7 version: Configure the new password for the root user by entering:
FLUSH PRIVILEGES;
quit
bin/controller.{sh|bat} stop-db
bin/controller.sh start
If you are using MS Windows, you must use the Windows services to start the Controller.
In the case of a Controller HA pair, generate an obfuscated password file for the Controller Database root user using the following
command on the primary Controller server. This command generates the password file on both the primary and secondary Controller
servers.
controller-ha/set_mysql_password_file.sh -p <new-password-here> -s
<secondary_controller_hostname>
Downtime is required to change the Controller database root user password. If you have installed a Controller HA pair, you must disable auto-
failover to avoid an accidental failover while changing the password. For more details about disabling auto-failover, see Automatic Failover.
This page provides instructions for changing the user running the Controller services. You may need to do this during a system migration or other event.
The procedure varies based on whether you are using the Enterprise Console.
To change the owner of the Controller, complete the following steps.
1. Stop all running Controller services using the following command in the machine terminal.
./controller.sh stop
CONTROLLER_HOME_DIR/bin/controller.sh stop
2. Change the ownership (recursively) of the entire Controller directory to the new user. In this example, appdynamics:admin is the user:group,
respectively:
3. If the Controller's data directory is outside of the root Controller's folder, then you must also change the owner of the database data files:
CONTROLLER_HOME_DIR/db/db.cnf
CONTROLLER_HOME_DIR/bin/controller.sh start
2. As the current user running the Controller services, shut down the Controller process.
CONTROLLER_HOME_DIR/bin/controller.sh stop
3. Change the ownership (recursively) of the entire Controller directory to the new user. In this example, appdynamics:admin is the user:group,
respectively:
4. If the Controller's data directory is outside of the root Controller's folder, then you must also change the owner of the database data files:
CONTROLLER_HOME_DIR/db/db.cnf
CONTROLLER_HOME_DIR/bin/controller.sh start
7. From the Enterprise Console, remove the hosts that were added:
8. Remove the credentials because the credentials are connected to the previous user.
9. Add the credentials using the new user, and then add the host.
10. Perform a Discover and Upgrade for the Controller.
11. (Optional) If you have installed the Linux services, then:
a. Logged in as root, uninstall the services:
HA/uninstall-init.sh
HA/install-init.sh
This page provides information and access instructions for the AppDynamics Administration Console.
Deployment Support
Related pages:
Do not confuse the AppDynamics Administration Console with the GlassFish application server administration console or the general application
administration page in the Controller UI.
The AppDynamics Administration Console lets you configure certain global settings such as metric retention periods, UI notification triggers, tenancy
mode, and accounts in multi-tenancy mode.
AppDynamics recommends that you do not change Controller settings in the console unless
under the guidance of an AppDynamics representative or as specifically directed by
documentation.
http://<hostname>:<port>/controller/admin.jsp
The console is served on the same port as the Controller UI, port 8090 by default.
3. Log into the system account with the root user password. The root user is a built-in global administrator for the Controller. Use the password you
set for this user when installing. See Update the Root User and Glassfish Admin Passwords
The root user password is different from a normal AppDynamics account password. It is not the same as the account owner or account
administrator password. If you are logged into the Controller using your current account, you need to log out of that account and then
back in as the root user to access the Administration Console. You can change the Controller root user password if you wish. See
Update the Root User and Glassfish Admin Passwords.
http://localhost:4848
Note that port 4848 is the default port number for the GlassFish administration console, but it may have been set to another value at installation
time. If the default port doesn't work and you are unsure of what port number to use, you can check the port configured for the network-listener
element named admin-listener in the domain.xml file.
2. Log in as user admin.
By default, the GlassFish user admin password is the same as the root user password for the Administration Console. You can change the
GlassFish user admin password if you wish. See Update the Root User and Glassfish Admin Passwords.
The Controller logs users out of Controller UI sessions after 60 minutes of inactivity by default. For an on-premises Controller, it's possible to modify the
default timeout value, as follows:
http.session.inactive.timeout: The amount of time without a client request to the Controller after which the user session times
out and the user will need to log in again to continue. The default is 3600 seconds (60 minutes).
ui.inactivity.timeout: The amount of time without user activity in the Controller UI after which the user session times out and the
user will need to log in again to continue. The default is -1 (disabled).
Related pages:
You can customize system events and system use notification messages from the Administration Console.
You can configure which type of events appear as notifications in the UI, as described here.
DISK_SPACE: There is an issue with the amount of disk space left on your system.
CONTROLLER_AGENT_VERSION_INCOMPATIBILITY: A mismatch between the version of the agent and the version of the controller has been detected.
CONTROLLER_EVENT_UPLOAD_LIMIT_REACHED: The limit on the number of events per minute that can be uploaded to the controller from this account has been reached. Once the limit is reached, no more events (other than certain key ones) are uploaded for that minute.
CONTROLLER_RSD_UPLOAD_LIMIT_REACHED: The limit on the number of request segment data (RSDs) per minute that can be uploaded to the controller from this account has been reached. RSDs are related to snapshots. Once the limit is reached, no more RSDs (other than certain key ones) are uploaded for that minute.
CONTROLLER_METRIC_REG_LIMIT_REACHED: The limit for registering metrics for this account has been reached. No further metric registrations are accepted.
CONTROLLER_ERROR_ADD_REG_LIMIT_REACHED: The limit for registering error Application Diagnostic Data (ADDs) for this account has been reached. No further error ADD registration is accepted.
CONTROLLER_ASYNC_ADD_REG_LIMIT_REACHED: The limit for registering async ADDs for this account has been reached. No further async ADD registration is accepted.
AGENT_ADD_BLACKLIST_REG_LIMIT_REACHED: If the Agent attempts to register an ADD above the limit, the Controller rejects the attempt and adds the ADD to a blacklist. There is a limit to the size of the blacklist; this event indicates that the limit has been reached.
AGENT_METRIC_BLACKLIST_REG_LIMIT_REACHED: If the Agent attempts to register a metric above the limit, the Controller rejects the attempt and adds the metric to a blacklist. There is a limit to the size of the blacklist; this event indicates that the limit has been reached.
This is a U.S. Government computer system, which may be accessed and used only for authorized Government
business by authorized personnel. Unauthorized access or use of this computer system may subject
violators to criminal, civil, and/or administrative action. All information on this computer system may
be intercepted, recorded, read, copied, and disclosed by and to authorized personnel for official
purposes, including criminal investigations. Such information includes sensitive data encrypted to
comply with confidentiality and privacy requirements. Access or use of this computer system by any
person, whether authorized or unauthorized, constitutes consent to these terms. There is no right of
privacy in this system.
This page describes how to create and manage accounts in a multi-tenant Controller. The tenant mode determines whether the Controller UI offers single
or multiple environments. See Controller Deployment.
Switching from single-tenancy to multi-tenancy mode is supported. However, switching from multi-tenancy to single-tenancy is not. Take
precautions to ensure multi-tenancy is the correct mode for your environment.
If multi-tenancy is enabled for an on-premises Controller, users must enter the account name in the Account field when logging in to the Controller UI.
The overall license limits applicable at the Controller level are independent of any specific limits you apply at the account level.
Agent-based Licensing: For example, if an account is set up with a Java Agent limit of 100, you can ensure that the new account never interferes
with the license availability of another account by setting the Java Units Provisioned value for the account to a much smaller limit. However, if
you set it to 100 and other accounts are also set to that amount, the first 100 agents that connect to the Controller would occupy those
units, regardless of the accounts they report in to. Similarly, you can limit the lifespan of the account by setting an expiration date for the license.
Infrastructure-based Licensing: For example, if an account is set up with an Infrastructure Monitoring limit of 100, you can ensure that the new
account never interferes with the license availability of another account by setting the Infrastructure Monitoring value for the account to a much
smaller limit. However, if you set it to 100 and other accounts are also set to that amount, the first servers with CPU cores totalling up
to 100 would occupy those units, regardless of the accounts they report in to. Similarly, you can limit the lifespan of the account by setting an
expiration date for the license.
4. When finished defining entitlements, click Save.
After enabling multi-tenant mode, users must specify the account they want to log into in the Account field in the Controller UI login screen. See:
Related pages:
Reports
Port Settings
The Reporting Service is a standalone Controller process responsible for generating and transmitting reports. The Controller uses the Reporting Service to
send both one-time reports and scheduled reports. For more information about the Reporting Service, see Fonts Needed for the Reporting Service and
Installation Settings.
You can configure the Reporting Service with files in the following directory:
<Controller home>/reporting_service/reports/config
You can configure Reporting Service behavior in the user-config.json file. Any configuration changes made in user-config.json override the default behavior specified in default-config.json.
By default, the Reporting Service listens for HTTP on port 8020 and HTTPS on port 8021:
"reportServer": {
"port": "8020",
"portSecure": "8021"
}
To force HTTP only, change "portSecure" to "0":
"reportServer": {
"port": "8020",
"portSecure": "0"
}
Alternatively, for security reasons, if you want to force HTTPS, change "port" to "0":
"reportServer": {
"port": "0",
"portSecure": "8021"
}
After making the change, stop and start the Reports Server as follows:
Linux/Mac:
cd <installroot>/controller/reporting_service/reports/bin
./reports-service.sh stop
./reports-service.sh start
Windows:
cd <installroot>\controller\reporting_service\reports\bin
reports-service.bat stop
reports-service.bat start
By default, the hostname properties are empty, which leaves the Reporting Service ports reachable on all network interfaces:
"reportServer": {
"port": "8020",
"portHostname" : "",
"portSecure": "8021",
"portSecureHostname" : ""
},
For example, to restrict the ports to the loopback interface only, set the hostname properties to localhost:
"reportServer": {
"portHostname" : "localhost",
"portSecureHostname" : "localhost"
},
After making the change, stop and start the Reports Server as follows:
Linux/Mac:
cd <installroot>/controller/reporting_service/reports/bin
./reports-service.sh stop
./reports-service.sh start
./reports-service.sh list
Windows:
cd <installroot>\controller\reporting_service\reports\bin
reports-service.bat stop
reports-service.bat start
When using the Enterprise Console, starting or stopping the Controller will also start or stop the Reporting Service.
You can also start or stop the Reporting Service on its own, and check whether it is running, with the following commands:
./reporting_service/reports/bin/reports-service.sh|bat start
./reporting_service/reports/bin/reports-service.sh|bat stop
./reporting_service/reports/bin/reports-service.sh|bat list
View Logs
The Reporting Service uses the following logs in the Controller home directory:
/logs/reporting-server.log. Records whether the report email was sent, along with details of the report object requested by the user.
/reporting_service/reports/logs/reporting-process.log. Confirms that the Reporting Service process started and whether
exceptions occurred. Note that this log file is only used on Linux Controllers.
This audit capability creates an audit.log file that is used to monitor user activities and configuration changes in the Controller. Be aware that SaaS
customers do not have access to the audit.log file because it is held on the AppDynamics Controller server. The information is retrieved through the following actions.
You must have account-level permissions to view and configure scheduled reports. Use this report to view changes made to user information,
Controller configuration, and application properties.
a. Note: Custom time range options are available for all the Report Types.
5. Select your report file format as PDF, JSON, or CSV.
a. Optionally, uncheck the Show Diff box to remove the Object Changes column from your report file.
6. Choose the data to include or exclude from the drop-down list.
You can create new, duplicate existing, or modify current reports as well as set an email delivery schedule to a defined list of recipients. You can also
choose the Send Report Now right-click option for an immediate look at the audit details. Review the Reports documentation for more details on other
types of reporting.
The Controller Audit Log Report is sent by email according to the addresses added to the configurations page. This report captures the following
information:
UserName
ObjectName
ApplicationName
Format
GET /controller/ControllerAuditHistory?startTime=<start-time>&endTime=<end-time>&include=<field>:<value>&exclude=<field>:<value>
For example:
http://localhost:8080/controller/ControllerAuditHistory?startTime=yyyy-MM-dd&endTime=yyyy-MM-dd&include=filterName1:filterValue1&include=filterName2:filterValue2&exclude=filterName1:filterValue1&exclude=filterName2:filterValue2
[
{"timeStamp":1450569821811,"auditDateTime":"2015-12-20T00:03:41.811+0000","accountName":"customer1","securityProviderType":"INTERNAL","userName":"user1","action":"LOGIN"},
{"timeStamp":1450570234518,"auditDateTime":"2015-12-20T00:10:34.518+0000","accountName":"customer1","securityProviderType":"INTERNAL","userName":"user1","action":"LOGIN"},
{"timeStamp":1450570273841,"auditDateTime":"2015-12-20T00:11:13.841+0000","accountName":"customer1","securityProviderType":"INTERNAL","userName":"user1","action":"OBJECT_CREATED","objectType":"AGENT_CONFIGURATION"},
...
{"timeStamp":1450570675345,"auditDateTime":"2015-12-20T00:17:55.345+0000","accountName":"customer1","securityProviderType":"INTERNAL","userName":"user1","action":"OBJECT_DELETED","objectType":"BUSINESS_TRANSACTION"},
{"timeStamp":1450570719240,"auditDateTime":"2015-12-20T00:18:39.240+0000","accountName":"customer1","securityProviderType":"INTERNAL","userName":"user1","action":"APP_CONFIGURATION","objectType":"APPLICATION","objectName":"ACME Book Store Application"},
{"timeStamp":1450571834835,"auditDateTime":"2015-12-20T00:37:14.835+0000","accountName":"customer1","securityProviderType":"INTERNAL","userName":"user1","action":...}
]
Input parameters
To control the size of the output, the range between the start-time and end-time cannot exceed 24 hours. For periods longer than 24
hours, use multiple queries with consecutive time parameters.
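Splitting a longer period can be sketched as a small loop that emits one query per 24-hour window (assumes GNU date; the output shows only the query paths, so append your host, credentials, and include/exclude filters):

```shell
# Emit one ControllerAuditHistory query per 24-hour window in the range.
# Dates here are examples; adjust start/end for the period you need.
start="2015-12-18"
end="2015-12-20"
day="$start"
while [ "$(date -d "$day" +%s)" -lt "$(date -d "$end" +%s)" ]; do
  next="$(date -d "$day + 1 day" +%F)"
  echo "GET /controller/ControllerAuditHistory?startTime=${day}&endTime=${next}"
  day="$next"
done
```

Each emitted window stays within the 24-hour cap, and consecutive windows share a boundary so no events are skipped.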
What is Audited
The following entries are audited:
ACCOUNT HTTP_REQUEST_ACTION
ACCOUNT_ROLE HTTP_REQUEST_ACTION_MEDIA_TYPE_CONFIG
ACTION_SUPPRESSION_WINDOW HTTP_REQUEST_ACTION_PLAN_CONFIG
AGENT_CONFIGURATION HTTP_REQUEST_DATA_GATHERER_CONFIG
ANALYTICS_DYNAMIC_SERVICE_HIERARCHICAL_CONFIGURATION INFO_POINT
APPLICATION JIRA_ACTION
APPLICATION_COMPONENT JMX_CONFIG
APPLICATION_COMPONENT_NODE MEMORY_CONFIGURATION
APPLICATION_CONFIGURATION METRIC_BASELINE
APPLICATION_DIAGNOSTIC_DATA MOBILE_APPLICATION
ASYNC_TRANSACTION_CONFIG NODEJS_ERROR_CONFIGURATION
BACKEND_DISCOVERY_CONFIG NOTIFICATION_CONFIG
BUSINESS_TRANSACTION OBJECT_INSTANCE_TRACKING
BUSINESS_TRANSACTION_CONFIG PHP_ERROR_CONFIGURATION
BUSINESS_TRANSACTION_GROUP POJO_DATA_GATHERER_CONFIG
CALL_GRAPH_CONFIGURATION POLICY
CUSTOM_ACTION PYTHON_ERROR_CONFIGURATION
CUSTOM_CACHE_CONFIGURATION RULE
CUSTOM_EMAIL_ACTION_PLAN_CONFIG RUN_LOCAL_SCRIPT_ACTION
CUSTOM_EXIT_POINT_DEFINITION SCHEDULED_REPORT
CUSTOM_MATCH_POINT_DEFINITION SERVICE_ENDPOINT_DEFINITION
DASHBOARD SERVICE_ENDPOINT_MATCH_CONFIG
DIAGNOSTIC_SESSION_ACTION SMS_ACTION
DOT_NET_ERROR_CONFIGURATION SQL_DATA_GATHERER_CONFIG
EMAIL_ACTION THREAD_DUMP_ACTION
ERROR_CONFIGURATION TRANSACTION_MATCH_POINT_CONFIG
EUM_CONFIGURATION USER
EVENT_REACTOR WORKFLOW
GLOBAL_CONFIGURATION WORKFLOW_ACTION
GROUP
The following actions are audited. Note that not all of these actions are supported for all of the audit entries in the table above.
ACCOUNT_REENABLED OBJECT_CREATED
ACCOUNT_ROLE_ADD_PERMISSION OBJECT_DELETED
ACCOUNT_ROLE_REMOVE_PERMISSION OBJECT_UPDATED
ACKNOWLEDGE_GDPR_DATA_PRIVACY SAML_AUTHENTICATION_CONFIG_CREATED
ANOMALY_DETECTION_CONFIG_CHANGED SAML_AUTHENTICATION_CONFIG_DELETED
FLOW_ICON_MOVED SAML_AUTHENTICATION_CONFIG_UPDATED
GROUP_ADD_ACCOUNT_ROLE USER_ADD_ACCOUNT_ROLE
GROUP_REMOVE_ACCOUNT_ROLE USER_ADD_TO_GROUP
LDAP_CONFIG_CREATED USER_EMAIL_CHANGED
LDAP_CONFIG_DELETED USER_PASSWORD_CHANGED
LDAP_CONFIG_UPDATED USER_PASSWORD_RESET
LOG_LEVEL_CHANGED USER_PASSWORD_RESET_COMPLETED
LOGIN USER_REMOVE_ACCOUNT_ROLE
LOGIN_FAILED USER_REMOVE_FROM_GROUP
LOGOUT
LOGOUT_FAILED
This page provides troubleshooting information for issues that may arise during Controller installation and operation.
The first step in troubleshooting Controller issues typically involves checking the log file:
<controller_home>/logs/server.log
Search the log for errors that may correspond to the issue you are encountering. If found, an error entry may help you identify and resolve the issue.
1. The Controller UI performs slowly. For short time ranges, such as 15 or 30 minutes, responses that take longer than 10 to 20 seconds can
indicate that your Controller is under stress.
2. When the Controller's metric reporting lags 7 to 10 minutes behind the current time, it can be an indication that your Controller is under stress. A
lag of about 3 to 5 minutes is normal.
3. When monitoring the Controller environment, you see that CPU, memory, and disk metrics are at about 75% capacity.
If you observe degradation in Controller performance, it may be due to one of the following:
The hardware resources for the Controller might not match the correct Controller profile.
The Controller performance profile may be incorrectly configured.
top (expect to see java and mysql with cpu greater than 0)
By default, the Enterprise Console waits 45 minutes for the Controller app server or database to start. When installing a medium or large profile Controller
or into certain types of environments such as virtual machines, the time it takes to start up the system can exceed the default startup timeout period.
These transaction logs are used to recover any failed Glassfish transactions, so deleting these logs on startup is not advised. Instead, configure your virus
scanners to ignore the entire Controller directory.
All log files for Controller are located in the <Controller_Installation_Directory>/logs folder.
Error receiving metrics (node not properly modeled yet: Could not find component for node): This error means the app agent tried to upload metric data for a specific node, but the node does not belong to any tier. Nodes must belong to tiers, and these tiers must belong to a business application, in order to receive metric data for that node. See Overview of Application Monitoring.
Received Metric Registration request for a machine that is NOT registered to any nodes. Sending back null!: This error indicates that the Controller received a registration request for metrics for a Machine Agent that listed a machine ID not yet associated with any node. Configure the Machine Agent to associate with the correct application, tier, and node. See Install the Machine Agent.
Agent upload blocked, as its reporting a time well into the future: The App Agents attempt to report metric data using Controller time. The agents retrieve the time from the Controller every five minutes and report times using a skew of the local machine time, if different.
If for some reason the App Agent reports metrics that are time-stamped ahead of the Controller time, the
Controller rejects the metrics. To avoid this event, ensure that the system times for the machine on which the
Controller is running and the machines for the app agents are in synchronization.
If you encounter this log entry, make sure that you have allocated sufficient swap space on the Controller machine. AppDynamics recommends allocating
a minimum of 10 GB of swap space.
See the documentation for your Linux distribution for recommendations on the value for the swappiness parameter. For example, RedHat recommends
setting swappiness to 10 for CentOS and RedHat kernels version 2.6.32-303 or later if you encounter OOM issues even though swap space is still
available.
Before you configure the swappiness parameter though, ensure that the machine has sufficient RAM and that the buffer pool size for MySQL is properly
configured.
3. Set the swappiness parameter in the /etc/sysctl.conf file to the same value you used in step 2. For example, add the following line to set the swappiness parameter to 10:
vm.swappiness = 10
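The effective value can be checked with a short sketch (assumes a Linux host with /proc mounted; 10 is the example value from the text, and applying a change still requires running sysctl -w as root):

```shell
# Read the effective swappiness and compare it with the recommended value.
target=10
current="$(cat /proc/sys/vm/swappiness)"
if [ "$current" -eq "$target" ]; then
  echo "vm.swappiness already $target"
else
  echo "vm.swappiness is $current; apply with: sudo sysctl -w vm.swappiness=$target"
fi
```

Setting the value in /etc/sysctl.conf (step 3 above) makes it persist across reboots, whereas sysctl -w only changes the running kernel.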
Could not determine the IP address of this host error during installation
During the installation process, the Enterprise Console attempts to ping the Controller by the hostname or IP address you enter. If the ping is unsuccessful
during the user input validation, the following error message appears: "Could not determine the IP address of this host. Please ensure that the IP address
of the Controller host resolves to its hostname or to localhost. You may need to add an entry in the hosts file on the Controller host and retry the
operation."
To make the hostname resolvable, add an entry for it to the hosts file on the machine on which you are installing the Controller. On Linux, the hosts file is
typically at /etc/hosts. On Windows, look for the file at the following location, C:\Windows\System32\Drivers\etc\hosts, or the location
appropriate for your version of Windows.
For example, the following shows the entry added as the third line of the default RedHat hosts file:
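A hypothetical version of such a hosts file follows (the IP address and hostname in the third line are placeholders; use your Controller host's actual address and name):

```
127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
192.168.2.110   mycontroller.mycompany.com
```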
If you encounter this error, verify that the Controller database is running properly. You can do so using one of the following commands:
lsof -i:3388 (Linux) or SysInternals Process Explorer (Windows): list the files opened by the process with pid 3388.
netstat -anp | grep 3388 (Linux) or netstat -ano | find "3388" (Windows): list all networking ports opened by the process with pid 3388.
If no processes are found, it indicates that the Controller database was terminated incorrectly. Start the Controller database again and check the Controller
server.log file for any error messages.
On Linux, run:
On Windows, open an elevated command prompt (in the Windows start menu, right-click the Command Prompt icon and choose Run as
Administrator) and run:
The logs will be copied to the Enterprise Console host under platform-admin/logs-controller-<platform-name>-<date-time-stamp>.zip.
See Platform Log Files to learn how to manage your Controller logs.
Submit all platform-admin/logs/* and platform-admin/logs-controller-*.zip, in particular the server.log files. You can also
use the log file utility described in Triggering automatic collection of Controller logs to collect logs.
If the Controller runs out of memory, it generates a heap dump. Submit all files in <controller_home>/appserver/glassfish/domains/domain1/config/hprof.
Submit all <controller_home>/appserver/glassfish/domains/domain1/config/gc.log files.
Submit information about the hardware and operating system configuration of the machine that is currently hosting the Controller, including
operating system, bit version, CPU cores, clock speed, disk configuration, and RAM.
Indicate the performance profile of the Controller. Run the controller diagnosis command, which captures the information in platform-admin-server.log:
Refer to the Controller diagnostic data in the platform-admin-server.log. See sample Controller diagnostic data on the Manage a High
Availability Deployment page.
Issues Generating Audit Reports Immediately after Upgrading the Controller to 4.5
When the Controller upgrade is complete, audit reports may not work immediately. The audit database table is migrated only after the upgrade
process, and the migration takes at least an hour to complete. If audit reports are run before the migration completes, audit table migration
messages are logged in the server.log file.
No action is required; try running the audit reports again after an hour.
The following steps describe how to collect troubleshooting information for your Controller. You may be asked for this information when troubleshooting
with the AppDynamics support team.
Get the heap dump before garbage collection using the following command:
Get the histogram before garbage collection using the following command:
Get the histogram after garbage collection using the following command:
kill -3 <Controller_pid>
heap_before_live.bin
histo_before_live.txt
histo_after_live.txt
jvm.log
This page describes how to check version information and the version of bundled components. This information is useful when troubleshooting the system
or performing other administrative tasks.
AppDynamics maintains and updates the bundled components as part of the AppDynamics platform. Do not attempt to upgrade a bundled
component independently of the platform upgrade procedure.
Controller Version
You can retrieve the Controller version in two ways:
For example, you can check the version of the bundled MySQL database by logging in to the database and running a version query:
<controller_home>/bin/controller.sh login-db
select version();
A newly installed 4.5 Controller packages and uses MySQL 5.7. However, a Controller that is upgraded to 4.5 from a previous version where
MySQL 5.5 was used will continue to use version 5.5.
You can upgrade the MySQL version on the Controller page in the GUI or with the following command:
Related pages:
You can upgrade a Controller instance using the Enterprise Console. The Enterprise Console simplifies the upgrade process by allowing you to discover
and upgrade single Controllers and HA-pairs.
The Enterprise Console supports standalone and HA-pair Controller upgrades. Use the following table to determine your course of action
based on your circumstances:
If your current version is >= 21.x and you are upgrading to Controller version 21.x or the latest version:
1. Access the downloads portal to download the Controller version.
2. Use the Controller installer and upgrade the Controller.
You can choose which version you would like to upgrade the Controller to as long as the Enterprise Console is aware of that version.
This means that you can upgrade the Controller to any intermediate version or to the latest version as long as the Enterprise Console
installer has been run for those versions. However, you cannot upgrade the Controller to an older version.
If you have a license for the older version, the license should work when upgrading the Controller to a new version. However, if you have
a temporary license for the old version and now have a new license, the new license will not work on the old Controller. In this case, you
should upgrade the Controller to the latest version before applying the license.
An upgrade results in Controller downtime, but it is not necessary to stop agents during the Controller upgrade.
The Enterprise Console expects a .passwordfile file to be present in the Controller home directory. The Enterprise Console reads
this password and validates it against the Controller. Once the upgrade is complete, the Enterprise Console removes the file, and stores
the password in its encrypted database.
If you change the Glassfish Admin Password manually, you also need to update it in the Enterprise Console Controller
Settings.
Before Upgrading
Before you start upgrading the Controller, make sure that you are using the correct update order.
Review the latest Release Notes and the release notes for any intermediate versions between the current version of your instance and
the version you are targeting to learn about issues and enhancements in those releases.
Check the most recent Controller System Requirements and Troubleshoot Controller Issues to review your Controller's current workload
and determine whether you need to change your performance profile and increase your hardware resources, if necessary.
You may change your Controller profile on the Platform Configurations pages of the Enterprise Console GUI, either before or
after you upgrade your Controller. This process is not reversible, and you cannot move from a larger to a smaller profile size.
Check the Controller's database.log for any errors. You can find the log at <controller_home>/db/logs/database.log. There
should not be any InnoDB: Error lines in the log. If any errors are found, please reach out to AppDynamics Support before attempting
the upgrade. Upgrading the Controller with a corrupt database may put the Controller in a bad state, with high recovery time.
If you changed any Glassfish settings that are not JVM options, note your changes. You may need to configure them after an upgrade.
The Enterprise Console recognizes and retains many common customizations to the domain.xml, db.cnf, and other configuration files,
but is not guaranteed to retain them all. If you have made manual configuration changes to the files, verify the configuration after
updating. See Retaining Configuration Changes to learn how to preserve changes.
If you uninstall the Enterprise Console that was previously managing the Controller and use a new Enterprise Console instance to
discover and upgrade the Controller, you must first manually create the .passwordfile file in order for the Enterprise Console to
continue with the discover and upgrade process. Create the file in the Controller home directory and add the AS_ADMIN_PASSWORD=<controllerRootUserPassword> value to it.
The Enterprise Console does not back up MySQL data. You need to back up the data before upgrading standalone installations.
If the upgrade does not finish successfully for any reason, see Troubleshooting the Upgrade for more information.
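Recreating the .passwordfile mentioned above can be sketched as follows (the demo writes to a scratch directory for illustration, so point controller_home at your actual Controller home; the AS_ADMIN_PASSWORD key name comes from the text above, and the password value is a placeholder):

```shell
# Demo: recreate the .passwordfile the Enterprise Console reads during
# discovery. Uses a temp directory here; substitute your Controller home.
controller_home="$(mktemp -d)"
printf 'AS_ADMIN_PASSWORD=%s\n' '<controllerRootUserPassword>' > "$controller_home/.passwordfile"
chmod 600 "$controller_home/.passwordfile"   # keep the credential private
cat "$controller_home/.passwordfile"
```

Restricting the file mode matters because the file briefly holds the Glassfish admin password in plain text until the Enterprise Console absorbs and removes it.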
1. Stop the Controller app server:
platform-admin.sh stop-controller-appserver
2. Back up the Controller home by copying the entire Controller home directory to a backup location. Note the following points:
If the data home for the Controller is not under the Controller directory, be sure to back up the database directory as well.
If it's not possible to back up the entire data set, you can selectively back up the most important tables. Use the Metadata
Backup SQL script described and attached to the Controller Data Backup and Restore page.
3. Restart your Controller after completing the backup and before upgrading.
After Upgrading
If you have configured settings in the domain.xml file, db.cnf or other configuration files manually or by using the Glassfish asadmin
utility, verify those changes in the configuration files or Controller Configurations page of the GUI after the upgrade and re-apply any
customizations that were not preserved. The Enterprise Console preserves several recommended customizations. After the upgrade,
you can find backup copies of common configuration files in the <controller_home>/backup directory.
As an optional step, your MySQL version can be upgraded after you upgrade the Controller, through the Enterprise Console GUI or by
using the mysql-upgrade job. See Bundled MySQL Database Version for more information.
To troubleshoot the issue, check the installation log at platform-admin/logs/platform-admin-server.log. The Controller server.log file may contain additional information.
You can also resume a failed Controller job from the CLI by passing the flag useCheckpoint=true as an argument after --args in your
command.
If for some reason the upgrade from the last checkpoint is not successful, you may retry the upgrade from the beginning: click Retry instead of Resume from the checkpoint. However, note that retrying from the beginning after a failed upgrade may
overwrite your customizations to db.cnf and domain.xml.
You can increase the default timeout period for system startup. The timeouts are defined in platform-admin/archives/controller/<version>/playbooks/controller.groovy. You can update controllerStartRetryTimeout (by default 10 * 60 seconds, that is, 10
minutes), and then retry the upgrade from the checkpoint.
When the Controller upgrade is complete, audit reports may not work immediately. The audit database table is migrated only
after the upgrade process, and the migration takes at least an hour to complete. If audit reports are run before the migration
completes, audit table migration messages are logged in the server.log file. No action is required; try running the audit reports again
after an hour.
Related pages:
You can use the Enterprise Console to onboard and upgrade a single node Controller instance. The Custom Install Discover & Upgrade option in the
GUI allows you to create a platform and discover a Controller.
Alternatively, if you have already created a platform, you must add credentials and hosts to the platform before you can perform discovery.
Then, discover the Controller on the page. Discovering a Controller means that the Enterprise Console learns about your existing Controller deployment,
such as profile, tenancy mode, existing domain configuration, and database configuration. This information is used to perform an upgrade.
Ensure that the Controller and database are running prior to the upgrade. The Enterprise Console validates the database root password and
Controller root passwords provided during the upgrade.
1. Check that you have fulfilled the Enterprise Console prerequisites before starting.
2. Upgrade the Enterprise Console to the latest version.
3. Open a browser and navigate to the GUI:
http(s)://<hostname>:<port>
The list is populated by versions that the Enterprise Console is aware of. Of those versions, the list will only show versions that are the
same or greater than the current Controller version.
Ensure that the Controller and database are running prior to the upgrade. The Enterprise Console validates the database root password and
Controller root passwords provided during the upgrade.
If your upgrade fails, you can resume by passing the flag useCheckpoint=true as an argument after --args.
As a result, an Enterprise Console job is generated where you can verify the success of the password reset.
Downtime is required to change the Controller Database user password. If you have installed a Controller HA pair, you must disable auto-failover to avoid an accidental failover while changing the password. For more details, refer to the Automatic Failover section.
1. Log in to the database as the root user by running the following command:
2. Execute the following queries, replacing <new_password_here> with the new password before executing the query.
3. Verify the login by running the following command in the <controller_home>/db/bin directory:
4. Update the password alias by using the following command in the <controller_home>/appserver/glassfish/bin directory:
Enter the user name admin, the Controller root user password as the admin password, and the Controller Database user password as the alias
password.
In the case of a Controller HA pair, follow steps 1-3 on the secondary controller server. Then, copy the file domain-passwords found
at <controller_home>/appserver/glassfish/domains/domain1/config from the primary to the secondary controller
server.
Related pages:
You can use the Enterprise Console to onboard and upgrade an HA Controller pair. For HA pairs that are not managed by the Enterprise Console, use the
discover and upgrade option; for HA pairs that are managed by the Enterprise Console, you must use the upgrade option.
Perform a set of pre-upgrade validations. A summary of validation errors is provided to you before you modify the system's state.
Quickly restore the older Controller version from the preserved secondary version in case of any upgrade issues. As you upgrade the primary
server, the secondary server is isolated, thereby providing you a backup from which to quickly restore the service.
If the Controller services are installed with privilege escalation using the setuid option (the -c option in install-init.sh), then running the following command
on the secondary will stop the secondary appserver, watchdog, and assassin.
or
If the Controller services are installed with privilege escalation using the sudoers option (the -s option in install-init.sh), then running the following
command on the secondary will stop the secondary appserver, watchdog, and assassin.
or
For both stopping and checking the statuses, if you do not remember which privilege escalation method was used to install the services, you can use both
variants, one after the other, in any order.
If you are using the HA Toolkit (HATK) and have made any customizations to it, AppDynamics recommends that you review your particular situation and
determine whether you should proceed with the migration. For more details, see the HA module in the Enterprise Console.
Steps 1 through 3 consist of different procedures depending on your HA Controller pair deployment:
Follow Option 1 - Discover and Upgrade for deployments that are not managed by the Enterprise Console.
Use the discover and upgrade job to onboard your HA Controller pair to the Enterprise Console before upgrading the primary
Controller.
Follow Option 2 - Upgrade for HA pair deployments that are managed by the Enterprise Console.
Use the upgrade option to upgrade your primary Controller managed by the current Enterprise Console instance.
Sample Output
Check that all hosts show the new versions, that the Controller is running on the primary and stopped on the secondary, and that MySQL is running on
both hosts. If you are not satisfied with the upgrade, see Verify the Primary Upgrade is Unsatisfactory.
If the secondary Controller upgrade fails, see Upgrade the Secondary Controller Fails for possible recovery options.
Sample Output
Ensure that the secondary host version has been upgraded. Also, the primary should be in a running state, while the secondary Controller appserver
should remain stopped. However, its MySQL process should be running.
Failover Issues
If you experience failover issues before upgrading, determine the condition of the secondary Controller. You may need to fix a broken secondary Controller
before you attempt an upgrade.
If the downtime maintenance window is closing, you need to restore the older deployment version and service. Due to the recently performed failover, the
current secondary host is a known-good host because it had been functioning as the primary host. You can quickly restore service by failing over to it, and
then repairing the host (which experienced the failed upgrade) by rebuilding it as a secondary host.
1. Enter the ha-failover command to revert the primary host to the older version:
2. Enter the incremental-replication command to reduce downtime. When dealing with large data, this is recommended because replication
time and downtime depend on data size.
Controller: secondary_upgrade_error
Option 1: If the failure is recoverable, you can retry the upgrade by entering the following command on the Enterprise Console host:
Option 2: If retrying the upgrade does not work, you can run the incremental-replication and finalize-replication jobs. This involves downtime on
the primary. Enter the following commands on the Enterprise Console host:
1. You are currently running version 4.5.13 of both Enterprise Console and Controller.
2. You activate the HA modules.
3. You then perform an upgrade only for the Enterprise Console to the latest version (4.5.15 or later).
4. You can upgrade the HA modules from either the Enterprise Console UI or the CLI:
From the CLI, run the following command on the Enterprise Console host to upgrade the HA modules:
The HA modules are upgraded to the latest version without upgrading the Controller. When you upgrade the HA modules, no downtime is required on the
Controller, and all HA settings are preserved.
You can use the HA Controller Upgrade Wizard once both of your Controllers (primary and secondary) have been onboarded into the Enterprise Console.
If you are using the HA Toolkit (HATK) and have made any customizations to it, AppDynamics recommends that you review your particular situation and
determine whether you should proceed with the migration. For more details, see the HA module in the Enterprise Console.
Upgrading the HA Controller pair consists of different procedures based on your deployment:
Follow Option 1 - Discover and Upgrade for deployments that are not managed by the Enterprise Console.
Use the discover and upgrade job to onboard your HA Controller pair to the Enterprise Console before upgrading the primary
Controller.
Follow Option 2 - Upgrade for HA pair deployments that are managed by the Enterprise Console.
Use the upgrade option to upgrade your primary and secondary Controllers managed by the current Enterprise Console instance.
Before you proceed with upgrading the secondary Controller, AppDynamics recommends that you verify that the Controller is working by
logging in to the Controller and checking that metrics have been received within the last five minutes. If you are not satisfied with the upgrade,
you can roll back to the older version from which you started.
The following procedure only applies to Controllers that have been upgraded from version 4.2.8 or earlier to a later version. In such
cases, after you upgrade to the later version, you may experience performance issues with the Controller database. However, for all new Controller
installations using version 4.2.9 or later, the metric data tables are already optimized.
To improve database performance when querying metrics, the primary key used by the metric data tables is read optimized; as a result, the primary key
of those tables changes.
Select Start Database Optimization from the Controller page to start a process that runs in the background on your primary Controller host.
The process performs several pre-checks to determine if there is enough disk space, and if any other database optimization process is running.
The amount of disk space required is determined by the size of the tables to optimize. Based on the amount of Controller data, the database
optimization job may take several hours to several days to complete.
4. Once all of the tables have been optimized successfully, the database optimization process completes and no longer displays on the page. To
verify that all tables have been optimized, enter and run the following query:
If the query returns any results, then those tables have not been optimized.
If the query returns zero records, then all of the tables were optimized successfully.
If both of the Controllers are onboarded into the Enterprise Console, review the Controller page and note the following fields:
a. Navigate to the Controller bin directory:
cd <controller_home>/bin
b. Log in to the Controller database:
./controller.sh login-db
c. Enter the standard MySQL replication status command and check the Seconds_Behind_Master field in the output:
show slave status\G
Seconds_Behind_Master: $Number_Of_Seconds_Behind_Master
If a non-zero number displays in the output for this test, wait until the number changes to zero.
d. After you ensure that replication is working as expected, you can run the database optimization job from the Enterprise Console. Select S
tart Database Optimization from the Controller page to start a process that runs in the background on your primary Controller host.
The process performs several pre-checks to determine if there is enough disk space, and if any other database optimization process is
running. The amount of disk space required is determined by the size of the tables to optimize. Based on the amount of Controller data,
the database optimization job may take several hours to several days to complete.
e. Once all of the tables have been optimized successfully, the database optimization process completes and no longer displays on the
page. To verify that all tables have been optimized, enter and run the following query:
If the query returns any results, then those tables have not been optimized.
If the query returns zero records, then all of the tables were optimized successfully.
You may need to stop the database optimization process if it is using too many resources and you notice a performance impact on the
Controller, or if you decide to reschedule the process to run at a later date.
Problem: A "Job failed; Database replication is broken" message displays.
Resolution: Re-establish database replication incrementally, then finalize replication.
Problem: The disk ran out of space while the database optimization job was running, and the job stopped processing.
Resolution: Free up disk space and restart the database optimization job.
This page describes how to uninstall the Controller software and associated files from a platform using the Enterprise Console.
Before Starting
If you have installed the Events Service with the Enterprise Console, it is recommended that you uninstall the Events Service before you uninstall the
Controller. See Uninstall the Events Service for more information.
In addition, if you have the EUM Server, Application Analytics, or other product modules installed, keep in mind that if you reinstall the Controller later, you
will need to configure integration settings for the modules manually.
Optionally, stop the Controller before uninstalling as described in Start or Stop the Controller. If you do not stop the Controller, the uninstaller will do so for
you. However, if your database or Controller generally take a long time to shut down, you can avoid the possibility of time-out errors during uninstallation
by stopping the services manually.
1. Open a console:
On Linux, open a terminal window and switch to the user who installed the Controller or to a user with equivalent directory permissions.
On Windows, open an elevated command prompt by right-clicking on the Command Prompt icon in the Windows Start Menu and
choosing Run as Administrator.
2. From the command line, navigate to the Enterprise Console bin directory, platform-admin/bin.
3. Run the following command:
Note that you cannot use the other AppDynamics platform components without a Controller, so you must install a new Controller before you can resume
using the platform.
The default End User Monitoring deployment assumes that EUM agents (Mobile and Browser) send their data to the EUM Cloud, a cloud-based processor.
To deploy EUM completely on-premises, you need to install the EUM Server, the on-premises version of the EUM Cloud, as described here.
Installation Overview
The EUM Server receives data from EUM agents, processes and stores that data, and makes it available to the AppDynamics Controller. Certain EUM
features—specifically, Browser Request Analytics and Mobile Request Analytics, features of Application Analytics that extend the functionality of Browser
and Mobile Analyze—require access to the AppDynamics Events Service.
To set up a complete on-premises EUM Server deployment, therefore, you need to:
1. Determine which version of the EUM Server is compatible with your other platform components.
2. Install the on-premises Controller or prepare an in-service Controller to work with the EUM Server
3. Install the on-premises Events Service Deployment and configure it to work with your on-premises Controller
4. Install the on-premises EUM Server and configure it to work with your Events Service and Controller.
In Demo mode, the EUM Server listens for connections on port 7001 or 7002. The secure port, 7002, uses a built-in, self-signed certificate, which is only
used in demo mode.
In a production environment, the EUM Server is likely to operate behind a reverse proxy. A reverse proxy relieves the performance burden of SSL
termination from the EUM Server. It also helps ease certificate management and security administration in general. Further, as the connection point for
agent beacons, the Server needs to have the security layer of a proxy between itself and the external Internet.
1. Check your Controller version. The 4.5 EUM Server works with AppDynamics Controller version 4.5 or earlier. A Controller works with an EUM
Server that is the same or a later version, which includes the major, minor, and patch version. Thus, a version 4.5.2 Controller works with a 4.5.2
or later version of the EUM Server, but that version 4.5.2 Controller does not work with a 4.5.1 or 4.5.0 version of the EUM Server. See Upgrade
the Production EUM Server for information about upgrading the platform.
2. Back up the current version of your Controller.
3. Choose a time window that has minimum impact on service availability.
If you are running on-premises and wish to keep all your processing on-premises, after installing and configuring the Controller, you must install an on-
premises version of the Events Service as described in this section. Note that relying on the Events Service purely for use with the EUM UI does not
require a separate Application Analytics license. Other uses may require a separate license.
There are multiple modes of deploying the Events Service. For detailed information on installing and configuring the Events Service, see Events Service
Deployment.
Run the EUM installer under the same user account on the target machine as the one used to install the Controller, or under an account that has read,
write, and execute permissions to the Controller home directory. Installing with incompatible permission levels—for example, attempting to install the EUM
Server as a regular user while the Controller was installed by the root user—may result in installation or operation errors.
The EUM Server is automatically installed as a Windows service. All upgrades are automatically converted to a Windows service.
GUI
Console
Silent mode with varfile
See the following page for details on installing as appropriate for your deployment mode:
On Linux, if you ever need to start the EUM Server manually, you can do so by running the following from the eum-processor directory:
bin/eum.sh start
On Windows, if you ever need to start the EUM Server manually, you can do so by running:
bin\eum-processor.bat start
You can check if the server is running and accessible by going to http://<hostname>:7001/eumaggregator/ping with your browser. Your browser
should display ping.
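The same health check can be sketched from the command line with curl; the hostname placeholder below must be replaced with your EUM Server host:

```shell
# Hypothetical curl form of the browser check above; replace <hostname>
# with your EUM Server host. A healthy server responds with the text "ping".
curl -s http://<hostname>:7001/eumaggregator/ping
```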
To stop the EUM Server, pass the stop command to the eum script. For example, on Linux, from the eum-processor directory, run:
bin/eum.sh stop
You can also start and stop the EUM database. On Windows, you can do so from the Windows Services.
Related pages:
This page lists the EUM Server requirements, offers sizing guidance, and shows you how to use configuration to modify the default settings. For additional
EUM Processor sizing information, see Analytics' Recipe Book for on-prem configuration in the AppDynamics Community.
Hardware Requirements
The requirements and guidelines for the EUM Server machine (basic usage) are as follows:
Minimum 50 GB extra disk space. See Disk Requirements Based on Resource Timing Snapshots to learn when more disk space is needed.
64-bit Windows or Linux operating system
Processing: 4 cores
10 Mbps network bandwidth
Minimum 8 GB memory total (4 GB is defined as max heap in JVM). See RAM Requirements Based on the Beacon Load to learn when more
RAM is required.
NTP enabled on both the EUM Server host and the Controller machine. The machine clocks need to be able to synchronize.
A machine with these specs can be expected to handle around 10K page requests a minute or 10K simultaneous mobile users. Adding on-
premises Analytics capability requires increasing these requirements—particularly disk space—considerably, depending on the use case.
The table below specifies the required RAM based on your beacon load per minute and lists the content of a typical beacon.

Beacons per minute | Required RAM | Typical beacon content
~3K | 8 GB | 600 sessions, 1K base pages, 2K virtual pages, 7K Ajax requests
~16K | 16 GB | 1.8K sessions, 5K base pages, 10K virtual pages, 40K Ajax requests
~26K | 16 GB | 3.6K sessions, 8K base pages, 17K virtual pages, 62K Ajax requests
~33K | 32 GB | 3.9K sessions, 10K base pages, 20K virtual pages, 74K Ajax requests
Because the number of resource timing snapshots impacts disk usage, you should follow the guidelines in the table below.
Resource timing snapshots per minute | Required disk space
~500 | 40 GB
~1000 | 64 GB
~1500 | 96 GB
~2000 | 128 GB
If needed, you can reduce the number of resource timing snapshots or reduce the disk space allotted for storing resource snapshots by doing one or more
of the following:
Configure the JavaScript Agent to modify and limit the number of resources to monitor.
Use the EUM Server configuration onprem.resourceSnapshotAllowance to specify the maximum disk space allotted for storing resource
snapshots. See EUM Server Configuration File for a complete list of configurations.
Limit the number of snapshots retained by the EUM server by setting a global maximum, reducing the time that they are retained, or by filtering
snapshots based on the network response time. See Limit the Number of EUM Snapshots for instructions.
Filesystem Requirements
The filesystem of the machine on which you install EUM should be tuned to handle a large number of small files. In practical terms, this means that either
the filesystem should be allocated with a large number of inodes or the filesystem should support dynamic inode allocation.
Controller Version
The AppDynamics Platform you use with the EUM server must have a supported Controller version installed. Controllers only work with the same or later
versions of the EUM Server. For example, the 4.5 EUM Server works with the AppDynamics Controller version 4.5 or earlier.
See "Configure User Limits in Linux" below for information on how to check and set user limits.
Warning in database log: "Could not increase number of max_open_files to more than xxxx".
Warning in server log: "Cannot allocate more connections".
To check your existing settings, as the root user, enter the following commands:
ulimit -S -n
ulimit -S -u
The output indicates the soft limits for the open file descriptor and soft limits for processes, respectively. If the values are lower than recommended, you
need to modify them.
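The two checks above can be wrapped in a small script; this is a sketch, and the comparison thresholds mentioned in the comments are illustrative assumptions rather than official AppDynamics figures:

```shell
# Print the soft limits that the EUM Server process will inherit.
# Compare these against the minimums recommended for your deployment.
open_files=$(ulimit -S -n)   # soft limit for open file descriptors
user_procs=$(ulimit -S -u)   # soft limit for user processes
echo "open file descriptors (soft limit): $open_files"
echo "max user processes (soft limit):    $user_procs"
```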
If your system has a /etc/security/limits.d directory, add the settings as the content of a new, appropriately named file under the
directory.
If it does not have a /etc/security/limits.d directory, add the settings to /etc/security/limits.conf.
If your system does not have a /etc/security/limits.conf file, it is possible to put the ulimit command in /etc/profile. However,
check the documentation for your Linux distribution for the recommendations specific for your system.
1. Determine whether you have a /etc/security/limits.d directory on your system, and take one of the following steps depending on the
result:
If you do not have a /etc/security/limits.d directory:
a. As the root user, open the limits.conf file for editing: /etc/security/limits.conf
b. Set the open file descriptor limit by adding the following lines, replacing <login_user> with the operating system username
under which the EUM Server runs:
When you log in again as the user identified by login_user, the limits will take effect.
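The limits.conf entries for step b take the following standard form; the numeric values here are illustrative assumptions, so substitute the limits recommended for your deployment:

```
# Raise open-file and process limits for the EUM Server user.
# Replace <login_user> and the example values with your own.
<login_user> soft nofile 65535
<login_user> hard nofile 65535
<login_user> soft nproc 8192
<login_user> hard nproc 8192
```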
Network Settings
The network settings on the operating system need to be tuned for high-performance data transfers. Incorrectly tuned network settings can manifest
themselves as stability issues on the EUM Server.
The following command listing demonstrates tuning suggestions for Linux operating systems. As shown, AppDynamics recommends a TCP/FIN timeout
setting of 10 seconds (the default is typically 60), the TCP connection keepalive time to 1800 seconds (reduced from 7200, typically), and disabling TCP
window scale, TCP SACK, and TCP timestamps.
The commands demonstrate how to configure the network settings in the /proc system. To ensure the settings persist across system reboots, be sure to
configure the equivalent settings in /etc/sysctl.conf, or the network stack configuration file appropriate for your operating system.
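A sketch of the equivalent /etc/sysctl.conf entries for the recommendations above; the key names are the standard Linux TCP tunables, so verify them against your distribution's documentation:

```
# TCP/FIN timeout reduced to 10 seconds (default is typically 60)
net.ipv4.tcp_fin_timeout = 10
# TCP keepalive time reduced to 1800 seconds (typically 7200)
net.ipv4.tcp_keepalive_time = 1800
# Disable TCP window scaling, SACK, and timestamps
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
```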
Required Libraries
libaio
tar
libaio Requirement
Install libaio on the host machine if it does not already have it installed. The following table provides instructions on how to install libaio for some
common flavors of the Linux operating system.
Red Hat and CentOS  Use yum to install the library, for example: yum install libaio
Ubuntu and Debian   Use a package manager such as APT to install the library, for example: apt-get install libaio1
tar Requirement
You will need the tar utility to unpack the EUM Server installer.
Install tar on the host machine if it does not already have it installed. The following table provides instructions on how to install tar for some common
flavors of the Linux operating system.
Red Hat and CentOS  Use yum to install the utility, for example: yum install tar
Ubuntu and Debian   Use a package manager such as APT to install the utility, for example: apt-get install tar
The GUI and silent installation methods are described below. To start the installer using interactive console mode, start the installer with the -c switch. The
console mode prompts you for the equivalent information that appears in the GUI installer screens.
Additionally, you can run the installer using a Response file (for unattended installations). See Installing with the Silent Installer.
Requirements
Before starting, download the installer distribution and extract it on the target machine. You obtain the EUM installer from the AppDynamics
Download Center.
To secure connections from agents to the EUM Server, AppDynamics strongly recommends that SSL traffic be terminated at a reverse proxy that
sits in front of the EUM Server in the network path and forwards connections to the EUM Server. However, if this is not possible in your
installation, it is possible to connect with HTTPS directly to the EUM Server. For information on setting up a custom keystore for production, see
Secure the EUM Server.
If you install and configure the Events Service with HTTPS support, you must perform a workaround for your EUM Server installation to
complete properly. After the Events Service certificate configuration, install the EUM Server without Analytics enabled. Then, install the
certificate into the EUM Server keystore following the steps described on the Secure the EUM Server page. Configure Analytics in the
Events Service Properties, and restart the EUM Server.
Before you install the EUM Server, Linux systems must have the libaio library installed. See the EUM Server Requirements.
Install the EUM Server for a Production Deployment with the GUI Installer
Run the on-premises EUM installer on the machine on which you want to install the EUM Server.
c.
Usernames and passwords can only consist of ASCII characters. In addition, passwords cannot include the characters '^', '/',
or '$'.
This completes the initial configuration and setup of the EUM Server. When finished, the EUM Server is running.
Post-Installation Tasks
To complete the AppDynamics EUM Server installation, you must perform these additional post-installation tasks (as shown in the last AppDynamics End
User Monitoring Setup Wizard screen):
Sample
analytics.enabled=true
analytics.serverScheme=http
analytics.serverHost=hostname-events-service (needs to be the hostname of your Events Service)
analytics.port=9080
analytics.accountAccessKey=1a59d1ac-4c35-4df1-9c5d-5fc191003441
The <analytics.accountAccessKey> is the Events Service key that appears as the appdynamics.es.eum.key value in the
Administration Console:
1. Create a file named response.varfile on the machine on which you will run EUM installer and include the following:
sys.adminRights$Boolean=false
sys.languageId=en
sys.installationDir=/AppDynamics/EUM
euem.InstallationMode=split
euem.Host=eumhostname
euem.initialHeapXms=1024
euem.maximumHeapXmx=4096
euem.httpPort=7001
euem.httpsPort=7002
mysql.databasePort=3388
mysql.databaseRootUser=root
mysql.dbHostName=localhost
mysql.dataDir=/usr/local/AppDynamics/EUM/data
mysql.rootUserPassword=singcontroller
mysql.rootUserPasswordReEnter=singcontroller
eumDatabasePassword=secret
eumDatabaseReEnterPassword=secret
keyStorePassword=secret
keyStorePasswordReEnter=secret
eventsService.isEnabled$Boolean=true
eventsService.serverScheme=http
eventsService.host=eventsservice_host
eventsService.port=9080
eventsService.APIKey=1a234567-1234-1234-4567-ab123456
2. Modify values of the installation parameters based on your own environment and requirements. Particularly ensure that the directory paths and
passwords match your environment.
3. Run the installer with the following command:
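Assuming the installer follows install4j silent-install conventions (the same family as the -c console switch mentioned on this page), the Linux invocation would resemble the following sketch; the -q and -varfile switches are that assumption, and the installer filename must match your downloaded version:

```shell
# Silent install using the response file created in step 1 (install4j-style
# switches assumed; adjust the installer filename to your download).
./euem-64bit-linux-4.5.x.x.sh -q -varfile response.varfile
```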
You can run the installer in one of three modes. The GUI and silent installation methods are described below. To start the installer in interactive console
mode, start the installer with the -c switch. The console mode prompts you for the equivalent information that appears in the GUI installer screens, as
described below.
In addition, you can run the installer using a Response file (for unattended installations). See Installing with the Silent Installer.
If you do not already have an existing on-premises Controller, install it as described in Custom Install.
Installation Requirements
To perform the Demo Installation, you must do the following:
Install and run a Controller instance on the same host machine before starting the EUM Server installation.
Install the EUM Server with the same user account used to install the Controller, or use an account that has read, write, and execute permissions
to the Controller home directory.
a. From a command prompt, navigate to the directory to which you downloaded the EUM Server installer.
b. Change permissions on the downloaded installer script to make it executable, as follows:
chmod +x euem-64bit-linux-4.5.x.x.sh
c. Run the installer:
./euem-64bit-linux-4.5.x.x.sh
On Windows:
a. Open an elevated command prompt (run as administrator) and navigate to the directory to which you downloaded the EUM
Server installer.
b. Run the installer:
euem-64bit-windows-4.5.x.x.exe
Usernames and passwords can only consist of ASCII characters. In addition, passwords cannot include the characters '^', '/', or '$'.
Post-installation Tasks
After installing the EUM server, you must perform three additional post-installation tasks:
1. Create a file named response.varfile on the machine on which you will run EUM installer with the following:
sys.adminRights$Boolean=false
sys.languageId=en
sys.installationDir=/AppDynamics/EUM
euem.InstallationMode=demo
euem.Host=controller
euem.initialHeapXms=1024
euem.maximumHeapXmx=4096
euem.httpPort=7001
euem.httpsPort=7002
mysql.databasePort=3388
mysql.databaseRootUser=root
mysql.dbHostName=localhost
mysql.dataDir=/usr/local/AppDynamics/EUM/data
mysql.rootUserPassword=singcontroller
mysql.rootUserPasswordReEnter=singcontroller
eumDatabasePassword=secret
eumDatabaseReEnterPassword=secret
keyStorePassword=secret
keyStorePasswordReEnter=secret
2. Modify values of the installation parameters based on your own environment and requirements. Particularly ensure that the directory paths and
passwords match your environment.
3. Run the installer with the following command:
On Windows, use:
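Assuming the installer follows install4j silent-install conventions (the -q and -varfile switches are that assumption), the invocations would resemble these sketches, on Linux and Windows respectively; adjust the installer filenames to your downloaded version:

```shell
# Linux silent install using the response file created in step 1:
./euem-64bit-linux-4.5.x.x.sh -q -varfile response.varfile
# Windows silent install:
# euem-64bit-windows-4.5.x.x.exe -q -varfile response.varfile
```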
Related pages:
This page describes how to provision EUM licenses for single-tenant and multi-tenant Controllers.
Setup Requirements
Deploy an on-premises AppDynamics Controller
Set up multi-tenancy on the Controller
3. Provision each license, one at a time, on the EUM Server by running the following command:
If you use HTTPS connections in a production (split host) EUM Server installation, use a custom RSA security certificate for the EUM server. This page
describes how to create an RSA security certificate, change the password for the credential keystore, and how to obfuscate a password for the security
certificate keystore.
For Mobile Real User Monitoring, if you use the default or another self-signed certificate on your EUM Server for testing, you may receive the following
error: "The certificate for this server is invalid". Ensure that your self-signed certificate is trusted by the simulator or device you use for testing. In real-world
scenarios, a CA signed certificate should be used since a self-signed certificate needs to be explicitly trusted by every device that reports to your EUM
processor.
To secure the EUM server with a custom certificate and keystore, generate a new JKS keystore and configure the EUM Server to use it.
The following instructions describe how to create a JKS keystore for the EUM Server with a new key-pair or an existing key-pair. Alternatively, you can
also configure the EUM server to use an existing JKS keystore.
The instructions demonstrate the steps with the Linux command line, but the commands are similar to the commands used for Windows. Make sure to
adjust the paths for your operating system.
1. Create a new certificate and keystore (1a) or import an existing certificate into a keystore (1b).
2. Configure the EUM Server to use the keystore.
3. Restart and test the new keystore.
cd <appdynamics_home>/EUM/eum-processor
2. Create a new keystore with a new unique key pair that uses RSA encryption:
../jre/bin/keytool -genkey -keyalg RSA -validity <validity_in_days> -alias 'eum-processor' -keystore bin/mycustom.keystore
This creates a new public-private key pair with an alias of 'eum-processor'. You can use any value you like for the alias.
The "first and last name" required during the installation process becomes the common name (CN) of the certificate. Use the name of
the server.
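The certificate signing request is produced with keytool's standard -certreq command; a sketch using the alias and output path from this example:

```shell
# Generate a CSR for the 'eum-processor' key pair created above.
# The output path /tmp/eum.csr matches the example in the text.
../jre/bin/keytool -certreq -alias 'eum-processor' -keystore bin/mycustom.keystore -file /tmp/eum.csr
```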
This generates a certificate signing request based on the contents of the alias, in the example 'eum-processor'. You should send the output
file (/tmp/eum.csr, in the example) to a Certificate Authority for signing. After you receive the signed certificate, proceed as follows.
6. Install the certificate for the Certificate Authority used to sign the .csr file:
This command imports your CA's root certificate into the keystore and stores it in an alias called myorg-rootca.
7. Install the signed server certificate as follows:
This command imports your signed certificate over the top of the self-signed certificate in the existing alias, in the example, 'eum-processor'.
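Sketches of the two import commands for steps 6 and 7, using standard keytool syntax with the aliases named above; the certificate file paths are placeholders:

```shell
# Step 6: import the CA's root certificate under the alias myorg-rootca.
../jre/bin/keytool -import -trustcacerts -alias myorg-rootca -keystore bin/mycustom.keystore -file /path/to/ca-root.cer
# Step 7: import the signed server certificate over the existing alias.
../jre/bin/keytool -import -alias 'eum-processor' -keystore bin/mycustom.keystore -file /path/to/signed-cert.cer
```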
8. Import the root certificate from step 6 to the Controller truststore:
cd <appdynamics_home>/EUM/eum-processor
bin/eum.sh stop
mv <keystore>.jks <keystore>.jks.old
4. Import the private and public key for your certificate into a PKCS12 keystore:
This command creates a JKS keystore with the name specified in the -destkeystore property.
6. Specify a password for the keystore. Use this password when you configure EUM to use the new keystore.
processorServer.keyStoreFileName=mycustom.keystore
5. Configure the password for the keystore. You can add the password to the file either in plain text or in the obfuscated form:
For a plain text password, add the password as the value for this property:
processorServer.keyStorePassword=mypassword
processorServer.keyStorePassword=<obfuscated_key>
processorServer.useObfuscatedKeyStorePassword=true
bin/eum.sh stop
bin/eum.sh start
2. Verify the new security certificate works by opening the following page in a browser:
https://<hostname>:7002/eumcollector/get-version
cd <appdynamics_home>/EUM/eum-processor
The sample command creates the password for the default demo keystore, ssugg.keystore. In your command, use the name of your own
keystore as the value for -keystore.
3. Enter the existing password and new password when prompted.
4. Get the obfuscated key by running the following command in the eum-processor directory:
processorServer.keyStorePassword=<obfuscated_key>
7. If you did not previously use an obfuscated password, add the following property:
processorServer.useObfuscatedKeyStorePassword=true
Note that completing these procedures requires a restart of the EUM Server.
cd <appdynamics_home>/EUM/eum-processor
2. Generate a credential store with the new key using the following command:
On Linux:
On Windows:
On Windows:
The command prints out the encrypted form of the DB_password value you entered.
4. Copy the output from the previous command to your clipboard.
5. Open bin/eum.properties for editing, and replace the value of the onprem.dbPassword setting with the new encrypted password you
copied to your clipboard.
6. Obfuscate the new credential key as follows:
On Linux:
On Windows:
7. Copy the output of the previous command to your clipboard and in eum.properties replace the value of onprem.credentialKey with the
value from your clipboard.
8. Restart the EUM Server.
cd <appdynamics_home>/EUM/eum-processor
2. Encrypt the new database password using the credential key which you entered during installation:
On Linux:
On Windows:
The command prints out the encrypted form of the DB_password value you entered.
3. Copy the output from the previous command to your clipboard.
4. Edit bin/eum.properties and replace the value of the onprem.dbPassword setting with the new encrypted password you copied to your
clipboard.
5. Save and close the properties file.
6. Restart the EUM server.
This page describes administration and advanced configuration options for the EUM Server.
To change the maximum length of the page URL read by the EUM Processor:
beaconReader.maxUrlLength=<max_length>
To keep your version of the database current, you need to update your copy of the database manually:
processorServer.collectorHttpPort=<PORT>
processorServer.collectorHttpsPort=<PORT>
4.
browserBeaconSampling.maxSamples = <global_limit>
By default, the EUM Server retains the event snapshots for 90 days. If your Events Service retains events for fewer days (e.g., 14 days), you can safely
change the EUM Server's retention period to be the same as the Events Service's retention period. If the EUM Server retains the event snapshots for
fewer days than the Events Service, however, you may run into errors when viewing older events in the Controller UI.
When reducing the lifespan of event snapshots, you are not modifying the retention period of the Controller or the Events Service.
eventSnapshotStore.lifespanInDays = <no_of_days>
Below are the supported threshold values and the snapshots that would be retained. The default value is Slow.
browserBeaconSampling.hierarchyAwareSamplerPageUXThreshold = "<threshold>"
requestLog:
appenders: []
requestLog:
timeZone: UTC
appenders:
- type: file
archive: true
currentLogFilename: ../logs/access.log
archivedLogFilenamePattern: ../logs/access-%d.log.gz
The table below lists and describes the supported EUM properties, lists defaults, and specifies whether the property is required. The values for the
database properties must conform with the MySQL syntax rules given in Schema Object Names.
onprem.dbUser eum_user Yes The user name for the EUM database.
onprem. -1 No The maximum disk space allotted for storing event snapshots. The default value of -1 allots unlimited disk space to
eventSnapshotDis store event snapshots. You can specify a positive integer representing the maximum number of bytes for storing
kAllowance event snapshots.
onprem. .. No The path to the directory storing EUM data such as snapshots.
fileStoreRoot /store
onprem. 365 No The number of days that crash reports are retained.
crashReportExpir
ationDays
onprem. 21474836 No The maximum disk space allotted for storing resource snapshots. The default maximum disk space is 20 GB or
resourceSnapshot 480 (20 21474836480 bytes. You can specify a positive integer representing the maximum number of bytes for storing
DiskAllowance GB) resource snapshots.
processorServer. 7001 No The HTTP port to the EUM Processor. The EUM Processor runs in one process containing the collector,
httpPort aggregator, crash-processor, and monitor services.
processorServer. true No The flag for turning enabling (true) or disabling HTTPS to the EUM Processor.
httpsProduction
processorServer. N/A No The password to the Key Store for the EUM Processor.
keyStorePassword
processorServer. bin No The path to the file that stores the password to the Key Store for the EUM Processor.
keyStoreFileName /ssugg.
keystore
processorServer. 7001 No The HTTP port of the EUM Collector. By default, the EUM Collector shares the same port as the EUM Processor,
collectorHttpPort but you can configure the port to be different. The EUM Collector receives the metrics sent from the JavaScript
agent.
analytics.enabled | true | Yes | The flag for enabling or disabling the Analytics Server.
analytics.serverScheme | http | No | The network protocol for connecting to the Analytics Server. Only required when analytics.enabled=true.
analytics.serverHost | events.service.hostname | No | The hostname of the Analytics Server. Only required when analytics.enabled=true.
analytics.port | 9080 | No | The port to the Analytics Server. Only required when analytics.enabled=true.
analytics.accountAccessKey | access-key | No | The access key for connecting to the Analytics Server. Only required when analytics.enabled=true.
analytics.eventTypeLifeSpan.<N>.eventType | | No | The type of event record whose retention is set by the matching lifeSpan property. Possible values: BrowserRecord, MobileSnapshot, SessionRecord, MobileSessionRecord.
analytics.eventTypeLifeSpan.<N>.lifeSpan | 8 | No | The number of days to retain the event records specified by analytics.eventTypeLifeSpan.<N>.eventType. If this property is set, you must also set analytics.eventTypeLifeSpan.<N>.eventType.
onprem.mobileAppBuildTimeSeriesRequestCountRollupDays | 7 | No | The EUM Collector searches for the dSYM file in the beacon traffic for the configured number of days. If the dSYM file is not present during the configured time frame, a warning message is displayed in the Controller UI.
onprem.maxNumberOfMobileBuildsWithoutDsym | 10 | No | The maximum number of visible missing dSYM files in the Controller UI.
collection.sessionEnabled | true | No | The flag for enabling or disabling browser/mobile session collection. If you are upgrading the EUM Server from version 4.2 or lower to 4.3 or higher, the default is false.
collection.accessControlAllowOrigins.<N> | For example:
collection.accessControlAllowOrigins.0=http://example1.com
collection.accessControlAllowOrigins.1=http://example2.com
collection.accessControlAllowOrigins.2=http://example3.com
eventSnapshotStore.lifespanInDays | 90 | No | The number of days that the event snapshots stored in the EUM Server's local blob store ($APPDYNAMICS_HOME/EUM/eum-processor/store) are retained. The event snapshots only apply to crash reports, code issues, and IoT errors.
sessionization.webSessionRetentionMins | 5 | No | The number of minutes that browser sessions are retained after they are closed. This allows browser sessions that begin and end at different times to be retained. The longer the configured retention time, the larger the number of closed sessions held in memory, resulting in higher memory usage.
sessionization.mobileSessionRetentionMins | 5 | No | The number of minutes that mobile sessions are retained after they are closed. This enables mobile sessions that begin and end at different times to be retained. The longer the configured retention time, the larger the number of closed sessions held in memory, resulting in higher memory usage.
throttling.resourceSnapshot.maxTotalPerMinPerAccount | 1000 | No | The maximum number of total resource snapshots retained each minute for an account.
throttling.resourceSnapshot.maxNormalPerMinPerAccount | 800 | No | The maximum number of resource snapshots of pages with a "Normal" user experience that are retained each minute for an account. In general, this value should be smaller than throttling.resourceSnapshot.maxTotalPerMinPerAccount so that resource snapshots of pages with a "Slow", "Very Slow", or "Stall" user experience are also retained.
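As an illustrative sketch (not taken from the product documentation), a minimal eum.properties fragment that enables the Analytics connection might look like the following; the hostname and access key are placeholders to replace with your own values:

```properties
analytics.enabled=true
analytics.serverScheme=http
analytics.serverHost=analytics.example.com
analytics.port=9080
analytics.accountAccessKey=<your-account-access-key>
```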
In EUM Server 4.5.1 and later, the property crashProcessing.sessionEnabled is no longer supported; the association of crashes with sessions is enabled by default. If you are using an earlier version (<4.5.1) of the EUM Server and want to upgrade to 4.5.1 or later, remove the property crashProcessing.sessionEnabled from the eum.properties file to prevent the EUM Server from throwing errors.
Related pages:
Port Settings
Synthetic Server Endpoints
The EUM Server has different endpoints serving distinct functions. This page provides a reference for testing the health of, and getting information about, on-premises EUM Servers.
EUM API - acts as the interface between the EUM Server and the Controller. The Controller retrieves EUM data from the EUM Server through the
EUM API endpoint.
EUM Collector - collects metrics from the EUM agents. The JavaScript Agent and Mobile Agents transmit data to the EUM Server through the
EUM Collector endpoint.
EUM Aggregator - collects and rolls up all the metrics per application and provides an interface for Controllers to download the metrics by application and timestamp.
Screenshot Service - collects and serves image tiles that form mobile screenshots. The Mobile Agents transmit the tiles to the Screenshot
Service, and the Controller retrieves the tiles to display the screenshots in mobile sessions.
/beacons/browser/v2/* | The V2 and latest endpoint for receiving CORS beacons from the JavaScript Agent.
/get-version | Returns the version, build, commit, and timestamp of the EUM Processor.
/iot/v1/application/* | The endpoint for IoT REST APIs. See the IoT REST API reference documentation for details.
/get-version | Returns the version, build, commit, and timestamp of the EUM Processor.
Screenshot Service (http(s)://<domain-name>:7001/screenshots/v1):
/version | Returns the version, build, commit, and timestamp of the Screenshot Service.
By default, the locations of end-users are resolved using public geographic databases. You can host an alternate geo server for your countries, regions,
and cities instead of using the default geo server hosted by AppDynamics.
You have intranet applications where the public IP address does not provide meaningful location information, but the user's private IP does.
You have a hybrid application where some users access the application from a private location and some access it from a public one. If a user
doesn't come from a specific private IP range mapped by the custom geo server, the system can be set to default to the public geo server.
GeoServer
└── geo
    ├── geo-ip-mappings.xml   <-- configure geo IP mapping here
    ├── schema.xsd            <-- schema for geo-ip-mappings.xml configuration
    └── WEB-INF
        ├── web.xml           <-- other configurations here
        └── classes
            └── logback.xml   <-- configure logging here
To install the geo server, copy the geo folder to the TOMCAT_HOME/webapps of your Tomcat server. Do not deploy the server in the same container as
the Controller.
Use the sample file in the geo subdirectory as a template. Any modifications at runtime are reloaded without a restart.
This file contains a <mapping> element for every location to be monitored. The file has the following format.
<mappings>
    <mapping>
        <subnet from="192.168.1.1" mask="255.255.255.0"/>
        <location country="United States" region="California" city="San Francisco"/>
    </mapping>
    <mapping>
        <ip-range from="10.240.1.1" to="10.240.1.254" />
        <location country="France" region="Nord-Pas-de-Calais" city="ENGLOS" />
    </mapping>
</mappings>
This data is visible in browser snapshots and can be used to filter browser snapshots for specific locations.

The <country>, <region>, and <city> elements are required. If the values of <country> and <region> do not correspond to an actual geographic location already defined in the geographic database, map support is not available for the location in the map panel, but Browser RUM metrics are displayed for the location in the grid view of the geographic distribution, the end user response time panel, trend graphs, the browser distribution panel, and the Metric Browser. The <city> element can be a string that represents the static location of the end user.

You will notice a <default> element. If an IP address is not covered by your IP mapping file, this is the value that is used. To use a public geo server for non-covered IP addresses, see Using a Hybrid Custom-Public Geo Server Setup.
In previous versions of the geo server, the enclosing tag was a <context-param>. This has now been changed to an <init-param>.
The geo server determines the end-user location from one of the following sources:
An IP address set by customizing the JavaScript Agent. For more information, see Set the Origin Location of the Request.
An explicit query parameter: for example, http://mycompany.com/geo/resolve.js?ip=196.166.2.1.
An IP provided using the AD-X-Forwarded-For header
An IP provided using the X-Real-IP header
An IP provided using the X-Forwarded-For header
The remote address of the HTTP request
Debugging
Because this debugging feature has a small performance impact, it should be turned off before putting the geo server into production.
To aid in debugging, the geo server ships with a debugging web interface enabled. You can reach this interface by navigating to http://<host>:<port>/geo/debug with a web browser.
The first tab, Configuration, displays the contents of the mapping file currently in use.
The second tab, History, shows the last few geo resolutions that have been performed. The number of resolutions retained is set by the HISTORY_MAX_COUNT init-param of the FrontControllerServlet in WEB-INF/web.xml:
<web-app ..>
<!-- ... -->
<servlet>
<servlet-name>FrontControllerServlet</servlet-name>
<servlet-class>com.appdynamics.eum.geo.web.FrontControllerServlet</servlet-class>
<!-- ... -->
<init-param>
<param-name>HISTORY_MAX_COUNT</param-name>
<param-value>20</param-value>
</init-param>
</servlet>
<!-- ... -->
</web-app>
The third tab, Test, can be used to test the mapping file by trying to resolve an arbitrary IP address.
When you first navigate to this tab, it shows the geo resolution for your browser's IP address. The form in this tab can be used to try the resolution of
another IP address.
Disabling debug
Open TOMCAT_HOME/webapps/geo/WEB-INF/web.xml and set DEBUG_ENABLED to false.
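Assuming DEBUG_ENABLED follows the same init-param convention shown above for HISTORY_MAX_COUNT (the parameter name comes from the instruction above; its exact placement in the servlet definition is an assumption), the entry would look something like this:

```xml
<init-param>
    <!-- assumed to sit alongside HISTORY_MAX_COUNT in the servlet definition -->
    <param-name>DEBUG_ENABLED</param-name>
    <param-value>false</param-value>
</init-param>
```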
This page describes how to check version information and the version of components bundled with the EUM Server. This information is useful when
troubleshooting the system or performing other administrative tasks.
AppDynamics maintains and updates the bundled components as part of the AppDynamics Platform. Do not attempt to upgrade a bundled
component independently of the platform upgrade procedure.
curl http(s)://<domain-name>:7001/v2/version
To get more information about the EUM Server, see EUM Server Endpoints.
This page describes how to upgrade an EUM Server to the latest production installation. This is usually done alongside an upgrade to the other platform
components, such as the Controller and Events Service.
Upgrade Procedure
The instructions below show you how to upgrade your EUM Server to the latest production version. Because the EUM MySQL database was moved from the Controller host machine to the EUM Server host machine in EUM Server 4.4, there are separate instructions below to help you migrate your data from versions earlier than 4.4 to the latest version. If you are upgrading from EUM Server 4.4 or higher to the latest version, you do not have to migrate your data, but you are advised to make a backup of it.
The following sections provide troubleshooting information for the EUM Server installation.
1. Check the Controller logs for errors in attempting to connect to the EUM Server. Also, see if the Controller UI allows you to enable EUM. If so, it's
likely that the connection between the Controller and EUM Server is working.
2. Check the logs of the EUM Server, especially <EUM_home>/logs/eum-processor.log. In the log, verify that the server started successfully
and is receiving beacons from agents.
3. Make sure that the EUM JavaScript Agent is actually injected into the monitored page and that the agent can load the remote JavaScript.
4. Use browser debugging tools to check for JavaScript errors in the monitored page.
With the Controller running and accessible to the EUM Server machine, install the license manually. Before starting, make sure the license.lic file is at
an accessible location on the EUM Server machine. Then install the license as follows:
1. Verify that the JAVA_HOME/bin is in the system PATH variable and points to a Java 1.7 instance.
2. In Windows, open an elevated command prompt (run as administrator).
3. From the command line, navigate to the eum-processor directory under your AppDynamics home.
4. From the eum-processor directory, run the following script:
On Linux:
./bin/provision-license <path_to_license_file>
On Windows:
bin\provision-license.bat <path_to_license_file>
See EUM Server Deployment for operating system requirements, including recommended settings for nofile and nproc limits for the EUM Server
operating system.
The AppDynamics Events Service is the on-premises data storage facility for unstructured data generated by Application Analytics, Database Visibility, and
End User Monitoring deployments.
The following information and instructions are intended for on-premises deployments only. SaaS deployments are managed by AppDynamics.
If you are installing the server components for End User Monitoring, Application Analytics, or Database Visibility, you need to use a scaled-out on-premises
Events Service. Scaled-out means that the service is not installed on the Enterprise Console host. This type of Events Service can be deployed as a single
node or a cluster of three or more nodes based on your needs. Additionally, you can add nodes to a scaled-out Events Service after you install it. It is not
recommended that you add the Controller host as part of the cluster. See Administer the Events Service.
However, for data redundancy and storage scalability, or if you are using End User Monitoring, Application Analytics, or Database Visibility with an on-premises Controller deployment, you need to deploy a dedicated Events Service installation.
Multiple AppDynamics components that depend on the Events Service should use the same Events Service instance or cluster.
The Controller includes an embedded Events Service instance used by the Database Visibility product by default. However, the embedded Events Service
is not meant to be used with production Application Analytics or EUM installations, since it runs on the same machine as the Controller and does not offer
data replication or scalability. It may be used for small-scale environments, especially those intended for demonstration or learning purposes. Note
however that it's not possible to migrate data from the embedded Events Service to an external Events Service instance if upgrading later.
Your Events Service can be deployed to support multiple Controllers, becoming a shared Events Service.
Single node deployment is recommended for test environments only. Production environments should deploy a multi-node cluster (see below
for details).
Multi-Node Cluster
A multi-node cluster is made up of three or more nodes. With a cluster, the Controller and other Events Service clients, such as the EUM Server and Analytics Agent, connect to the Events Service through a load balancer, which distributes load to the Events Service cluster members.
AppDynamics recommends multi-node clusters for production environments. Multi-node clusters provide the following benefits:
In a single-node deployment, connect through a load balancer or directly to the Events Service.
The nodes in a cluster swap a large amount of data. For this reason, when deploying a cluster, make sure to install all cluster nodes within the same local
network, ideally, attached to the same network switch.
Supported Deployments
Use this table to determine the supported deployment type and environment for the Events Service.
Multi-Node Clustered Events Service (3+ nodes) (version 20.2 or later) Yes Yes
Compared to a configuration with multiple Events Service clusters, a shared Events Service configuration requires less maintenance and lowers cost. Since the Events Service is horizontally scalable, a single large instance provides the same functionality as multiple instances.
There is no limit to the number of Controllers that can share an Events Service. However, it is recommended that you use separate Events Services for
dev and prod. Work with your AppDynamics account representative if you plan to expand your shared Events Service cluster.
Default Ports
The default ports used by the Events Service are:
The Events Service cluster members use additional ports for internal communication among the cluster members. All the ports used within the cluster are
listed in the Events Service configuration file, conf/events-service-api-store.properties.
This page describes general hardware and software requirements for the machines that host Events Service nodes.
General Requirements
Determine which version of the Events Service is compatible with your other platform components.
Use a Windows 64-bit or Linux 64-bit operating system supported by the platform. See Platform Requirements.
Solid-state drives (SSD) can significantly outperform hard disk drives (HDD), and are therefore recommended for production deployments. Ideally,
the disk size should be 1 TB or larger.
The Events Service must run on dedicated machines with identical directory structures, user account profiles, and hardware profiles.
For heap space allocation, AppDynamics recommends allocating half of the available RAM to the Events Service process, with a minimum of
7 GB up to 31 GB.
When testing the events ingestion rate in your environment, it is important to understand that events are batched. Ingestion rates observed at the
scale of a minute or two may not reflect the overall ingestion rate. For best results, observe ingestion rate over an extended period of time,
several days at least.
The Events Service requires Java 8u172.
Keep the clocks on your application, Controller, and Events Service nodes in sync to ensure consistent event time reporting across the
AppDynamics deployment.
Your firewall should not block the Events Service REST API port 9080; otherwise, the Enterprise Console will not be able to reach the Events Service remotely.
Once you determine your license unit requirements, it is important to consider other factors that affect hardware capacity, such as the processing load of queries run against the Events Service and the actual type of hardware used. A physical server is likely to perform better than a virtual machine. You should also take into account seasonal or daily spikes in activity in your monitored environment.
An event is the basic unit of data in the events service. In terms of application performance management, a Transaction Analytics event corresponds to a
call received at a tier. A business transaction instance that crosses three tiers, therefore, would result in three events being generated. In application
performance management metrics, the number of business transaction instances is reflected by the number of calls metric for the overall application. In
End User Monitoring, each page view equates to an event, as does each Ajax request, network request, or crash report.
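To make the event counting concrete, here is a small sketch using hypothetical traffic numbers; the three-tier case mirrors the paragraph above, and the daily instance count is invented for illustration:

```shell
# A business transaction instance generates one Transaction Analytics
# event per tier it crosses.
instances_per_day=500000   # hypothetical load
tiers=3
echo "$(( instances_per_day * tiers )) analytics events per day"
```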
For additional Events Service sizing information, see the following AppDynamics Community articles:
Understanding EUM and Events Service concepts (describes the concepts necessary to build the profile)
Build the Analytics traffic profile
Size the Events Service and EUM using the profile (contains the Analytics Recipe Book for on-premises configuration)
Limit EUM and Analytics usage (describes how to configure rules to enforce Analytics trade-offs when deploying Analytics on-premises)
The hardware shown for each license amount represents the hardware capacity of a theoretical combined load of both Transaction Analytics and Log
Analytics events. The numbers used were derived from actual tests that were performed with an uncombined load, from which the following numbers were
extrapolated. Note that the test conditions did not include query load and so may not be representative of a true production analytics environment.
The following table shows sizing recommendations and describes the size of the cluster used for testing. This does not mean you are limited to
a seven-node event service. If you need to go beyond seven nodes, contact your AppDynamics account representative to ensure proper sizing
for your specific environment.
Note that the retention period can be 8, 30, 60, or 90 days, which directly affects storage requirements.
i2.2xlarge (61 GB RAM, 8 vCPU, 1600 GB SSD) | i2.4xlarge (122 GB RAM, 16 vCPU, 3200 GB SSD) | i2.8xlarge (244 GB RAM, 32 vCPU, 6400 GB SSD)
The following points describe the test conditions under which the license units-to-hardware profile mappings in the table were generated:
The tests were conducted on virtual hardware and programmatically generated workload. Real-world workloads may vary. To best estimate your hardware
sizing requirements, carefully consider the traffic patterns in your application and test the Events Service in a test environment that closely resembles your
production application and user activity.
In End User Monitoring, each page view equates to an event, as does each Ajax request, network request, or crash report. There can be a few dozen Ajax
requests for every page load. In general, the ingestion capacity and sizing profile for EUM or Database Visibility Analytics events are equivalent to that for
Log Analytics, with the size of the raw events being about 2 kilobytes on average.
To calculate the sizing for EUM, multiply the peak number of browser records in a day by 12 KB. If peak capacity is reached, the Events Service simply
starts dropping traffic.
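The 12 KB-per-record rule of thumb above can be sketched as a quick calculation; the peak daily record count here is hypothetical:

```shell
# Estimate daily EUM storage: peak browser records/day x 12 KB each,
# converted to GB (integer arithmetic, so the result is rounded down).
peak_browser_records_per_day=2000000   # hypothetical peak
kb_per_record=12
echo "$(( peak_browser_records_per_day * kb_per_record / 1024 / 1024 )) GB per day"
```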
The table below provides details about the memory and storage of different types of browser records. The default retention period is configurable.
Browser Record Type | Memory Requirements Per Event | Optional | Default Retention
This page describes how to prepare the machine that will host Events Service nodes, along with general requirements for the environment.
For example, in terms of an AWS deployment, use the private IP address, such as 172.31.2.19, rather than the public DNS hostname, such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
On each machine, the following ports need to be accessible to external (outside the cluster) traffic:
For a cluster, ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or OS-level firewall software on each machine to open the listed ports:
9300 – 9400
The following shows an example of iptables commands to configure the operating system firewall:
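As a sketch only (the exact rules depend on your distribution and network policy; only ports 9080 and 9300-9400 are taken from this page, and the `service iptables save` step assumes a RHEL-style system):

```shell
# Open the Events Service REST API port to external traffic
iptables -I INPUT -p tcp --dport 9080 -j ACCEPT
# Open the intra-cluster communication range between cluster members
iptables -I INPUT -p tcp --dport 9300:9400 -j ACCEPT
# Persist the rules (RHEL/CentOS-style; other distributions differ)
service iptables save
```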
If a port on the Events Service node is blocked, the Events Service installation command will fail for the node and the Enterprise Console command output
and logs will include an error message similar to the following:
If you see this error, make sure that the ports indicated in this section are available to other cluster nodes.
vm.max_map_count=262144
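One common way to apply and verify this kernel setting (assuming root privileges; placing the line in /etc/sysctl.conf follows the usual sysctl convention rather than anything stated on this page):

```shell
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p             # reload settings from /etc/sysctl.conf
sysctl vm.max_map_count    # verify the new value
```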
Replace username_running_eventsservice with the username under which the Events Service processes run. For example, if you are running Analytics as the user appduser, use that name as the first entry.
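For illustration, assuming the common four-entry /etc/security/limits.conf form, the entries might look like the following; the numeric limits shown are placeholders to adapt to the nofile and nproc values recommended for your deployment, not values from this page:

```
appduser soft nofile 96000
appduser hard nofile 96000
appduser soft nproc 8192
appduser hard nproc 8192
```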
The Enterprise Console needs to be able to access each cluster machine using passwordless SSH for a non-embedded Events Service. Before starting,
enable key-based SSH access as described here.
This setup involves generating a key pair on the Enterprise Console and adding the public key as an authorized key on the cluster nodes. The following
steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your environment.
If you are using EC2 instances on AWS, the following steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for
your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys of the hosts. You can skip these steps in this case.
1. Log in to the Enterprise Console host machine or switch to the user you will use to perform the deployment:
su - $USER
2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cd .ssh
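The key-generation step itself (between steps 2 and 5) can be performed with ssh-keygen; the flags below are a typical non-interactive sketch, not taken from this page, and the file name matches the one used in step 5:

```shell
# Generate an RSA key pair without a passphrase (-N "") for unattended use.
ssh-keygen -t rsa -b 4096 -f appd-analytics -N ""
# The private key (appd-analytics) and public key (appd-analytics.pub) now exist.
ls appd-analytics appd-analytics.pub
```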
5. Enter a name for the file in which to save the key when prompted, such as appd-analytics.
6. Rename the key file by adding the .pem extension:
mv appd-analytics appd-analytics.pem
You will later configure the path to it as the sshKeyFile setting in the Enterprise Console configuration file, as described in Deploying an Events
Service Cluster.
7. Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:
9. Test the configuration from the host machine by trying to log in to a cluster node by ssh:
ssh host1
If unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified the
operating system firewall rules to accept SSH connections. If successful, you can use the Enterprise Console to deploy the platform.
If you encounter the following error, use the instructions in this section to double-check your passwordless SSH configuration:
This page describes how to install and administer the Events Service on Linux systems through the CLI. Steps for scaling up an embedded Events Service
using the Enterprise Console are also included.
The AppDynamics Enterprise Console automates the task of installing and administering an Events Service deployment through either the GUI or CLI. For
information on installing Events Service using the Enterprise Console, see Custom Install.
You do not need to specify the installation or data directory for the Events Service installation. If you do, use a different one from the platform
directory.
The Events Service can be deployed as a single node or as a multi-node cluster of 3 or more nodes.
The versions of Linux supported include the flavors and versions supported by the Controller, as indicated by Prepare Linux for the Controller.
The Events Service must run on a dedicated machine. The machine should not run other applications or processes not related to the Events
Service.
Use appropriately sized hardware for the Events Service machines. The Enterprise Console checks the target system for minimum hardware
requirements. For more information on these requirements, see the description of the profile argument to the Events Service install command in Install the Events Service Cluster.
The Controller and Events Service must reside on the same local network and communicate by the internal network. Do not deploy the cluster to
nodes on different networks, whether relative to each other or to the Controller where the Enterprise Console runs. When identifying cluster hosts
in the configuration, you will need to use the internal DNS name or IP address of the host, not the externally routable DNS name.
For example, in terms of an AWS deployment, use the private IP address, such as 172.31.2.19, rather than the public DNS hostname, such as ec2-34-201-129-89.us-west-2.compute.amazonaws.com.
Make sure that the appropriate ports on each Events Service host are open. See Port Settings for more information.
The Enterprise Console uses an SSH key to access the Events Services hosts. See the section below for information on generating the key.
Events Service nodes normally operate behind a load balancer. When installing an Events Service node, the Enterprise Console automatically
configures a direct connection from the Controller to the node. If you deploy a cluster, the first primary node is automatically configured as the
connection point in the Controller. You will need to reconfigure the Controller to connect through the load balancer VIP after installation, as
described below. For sample configurations, see Load Balance Events Service Traffic.
Port Settings
Each machine must have the following ports accessible to external (outside the cluster) traffic:
For a cluster, ensure that the following ports are open for communication between machines within the cluster. Typically, this requires configuring iptables or OS-level firewall software on each machine to open the listed ports:
9300 – 9400
The following shows an example of iptables commands to configure the operating system firewall:
If a port on the Events Service node is blocked, the Events Service installation command will fail for the node and the Enterprise Console command output
and logs will include an error message similar to the following:
For example, using ssh-keygen, you can create the key using the following command:
This setup involves generating a key pair on the Enterprise Console host and adding the Enterprise Console's public key as an authorized key on the
cluster nodes. The following steps take you through the configuration procedure for an example scenario. You will need to adjust the steps based on your
environment.
If you are using EC2 instances on AWS, the following steps are taken care of for you when you provision the EC2 hosts. At that time, you are prompted for
your PEM file, which causes the public key for the PEM file to be copied to the authorized_keys of the hosts. You can skip these steps in this case.
1. Log in to the Enterprise Console machine or switch to the user you will use to perform the deployment:
su - $USER
2. Create a directory for SSH artifacts (if it doesn't already exist) and set permissions on the directory, as follows:
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cd .ssh
mv appd-analytics appd-analytics.pem
You will later configure the path to it as the sshKeyFile setting in the Enterprise Console configuration file, as described in Deploying an Events
Service Cluster.
7. Transfer a copy of the public key to the cluster machines. For example, you can use scp to perform the transfer as follows:
9. Test the configuration from the Controller machine by trying to log in to a cluster node by ssh:
ssh host1
If unable to connect, make sure that the cluster machines have the openssh-server package installed and that you have modified
the operating system firewall rules to accept SSH connections. If successful, you can use the Enterprise Console on the Controller host
to deploy the Events Service cluster, as described next.
If the Enterprise Console attempts to install the Events Service on a node for which passwordless SSH is not properly configured, you will see the following
error message:
If you encounter this error, use the instructions in this section to double-check your passwordless SSH configuration. Also, run add-hosts first and then install the Events Service; if the SSH configuration is not set up correctly, the add-hosts command will fail.
The add-hosts command adds hostnames to the platform. During the Events Service installation, the hosts come from the platform hosts and are then used by the Events Service.
vm.max_map_count=262144
3. Disable swap memory by running the following command, and remove swap mount points by removing or commenting out the lines in /etc/fstab that contain the word swap.
swapoff -a
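The fstab edit in the step above can be scripted with sed; the following demonstrates the idea on a throwaway sample file rather than the real /etc/fstab:

```shell
# Build a sample fstab-style file to edit safely
cat > sample-fstab <<'EOF'
/dev/sda1  /     ext4  defaults  0 1
/dev/sda2  none  swap  sw        0 0
EOF
# Comment out any line containing a swap entry
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' sample-fstab
# Show the commented-out line
grep '^#' sample-fstab
```

To apply the same edit for real, point the sed command at /etc/fstab as root, after taking a backup.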
The installation directory is the directory where the Enterprise Console installs all platform components.
The same installation directory should exist and is used on all remote nodes. This is done to maintain the homogeneity of the
configuration across different nodes.
5. Add the SSH key that the Enterprise Console will use to access and manage the Events Service hosts remotely. (See Create the SSH Key for
more information):
<file path to the key file> is the private key for the Enterprise Console machine. The installation process uses the keys to connect to the Events Service hosts. The keys are not deployed; instead, they are encrypted and stored in the Enterprise Console database.
Arguments are:
jvmTempDir: Use this argument to override default JVM temporary /tmp directory in Linux installations.
hosts: Use this argument or host-file to specify the internal DNS hostnames or IP addresses of the cluster hosts in your
deployment. With this argument, pass the hostnames or addresses as parameters. For example:
host-file: As an alternative to specifying hosts as --hosts arguments, pass them as a list in a text file you specify with this argument.
Specify the internal DNS hostname or IP address for each cluster host as a separate line in the plain text file:
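A host file is plain text with one internal hostname or IP address per line; for example (the first address appears elsewhere on this page, the others are placeholders):

```
172.31.2.19
172.31.2.20
172.31.2.21
```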
profile: By default (when profile is not specified), the installation is considered a production installation. Specifying a developer profile (profile dev) directs the Enterprise Console to use a reduced hardware profile requirement, suitable for non-production environments only. The Enterprise Console checks for the following resources:
For example:
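As a sketch only: the subcommand name and flag spellings below are assumptions based on the Enterprise Console's platform-admin CLI and the hosts/profile arguments described above, so verify them against your installation before running:

```shell
bin/platform-admin.sh install-events-service \
    --profile dev \
    --hosts 172.31.2.19 172.31.2.20 172.31.2.21
```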
9. Log in to each Events Service node machine, and run the script for setting up the environment as follows:
chmod +x tune-system.sh
./tune-system.sh
sudo <installation_dir>/events-service/processor/bin/tool/tune-system.sh
10. If you are using a load balancer, configure the Controller connection to the Events Service using the virtual IP for the Events Service as presented at the load balancer, as follows:
It may take a few minutes for the Controller and Events Service to synchronize account information after you modify connection settings in the
console.
When finished, use the Enterprise Console for any Events Service administrative functions. You should not need to access the cluster node machines
directly once they are deployed. In particular, do not attempt to use scripts included in the Events Service node home directories.
The Enterprise Console automatically updates the Controller configurations after installation.
1. Set up load balancing. See Load Balance Events Service Traffic for information about configuring the load balancer.
2. Open the Enterprise Console GUI.
3. Verify that the credentials and hosts you want to use are added to the AppDynamics platform. For more information, see Administer the
Enterprise Console.
a. On the Credential page, add the SSH credentials for the hosts on which you want to install the Events Service.
b. On the Hosts page, add the hosts. The Enterprise Console uses these hosts for the scaled-up Events Service, which requires either a single host or three or more hosts.
4. On the Events Service page, select the Events Service, click More > Scale Up Events Service, and complete the wizard. When you enter hosts to use for a scaled-up Events Service, do not include the Controller host.
You do not need to restart the Controller since that is automatically done for you by the scale-up job.
5. Log in to each node machine, and run the script for setting up the environment as follows:
sudo <installation_dir>/events-service/latest/bin/tool/tune-system.sh
Note that only newly generated Database Monitoring data will be stored in the Events Service; previously collected data will remain in
the embedded Events Service instance unless it is migrated to the new Events Service. See Connect to the Events Service.
It may take a few minutes for the Controller and Events Service to synchronize account information after you modify connection settings in the Enterprise Console.
Troubleshooting Installation
If the Enterprise Console crashes or shuts down while installing the Events Service, the GUI may continue to report that the installation is in progress. To resolve this issue, uninstall and then reinstall the Events Service with the CLI.
You can install and administer the Events Service on Microsoft Windows systems as a single node or a cluster. Common use cases include:
Single-node Events Service — Good for demonstration purposes and other scenarios where data redundancy and high availability are not
required.
Three-node cluster — The minimum size for a production Events Service cluster.
Cluster of four nodes or more — For deployments where increased load or expected sizing exceeds the capacity of a three-node cluster.
Contact AppDynamics customer support if you anticipate deploying a cluster of 10 or more nodes.
Creating more than one instance of the same service type is not supported on Windows.
You do not need to specify the installation or data directory for the Events Service installation. If you do, use a directory different from the platform directory.
The master node is the first node to start up. If the master node becomes unavailable, the worker nodes attempt to elect a new master.
As you install Events Service, you specify the configuration for each node, which consists of two pieces of information:
Ensure that you have met all of the current Events Service Requirements, including:
1. Unzip the Events Service distribution archive to a directory on the target host. This creates the events-service directory with the Events
Service artifacts.
2. Begin configuring the connection to the Events Service in the Controller.
a. With the Controller running, open the Administration Console as the root user.
b. In the Controller Settings page, search for appdynamics.on.premise.event.service.url.
c. Replace the default value with the internal hostname of the Events Service machine and the Events Service listen port, 9080 by default.
For example: http://hostname:9080. Select Save.
If you are putting a load balancer in front of the Events Service (required for a cluster), this will be the VIP for the Events
Service as exposed at the load balancer. In this case, it is likely you will need to return to this step after you finish deploying
the node and configuring the load balancer.
If you do not install the default Events Service, then the appdynamics.on.premise.event.service.key setting is
blank. To create the appdynamics.on.premise.event.service.key, use a UUID generator.
b. Copy the setting's value. You will need this to complete Step 4.
c. Close the Administration Console.
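Any UUID generator works for creating the appdynamics.on.premise.event.service.key value; as one option, this sketch uses python3 from the command line:

```shell
# Generate a UUID suitable for appdynamics.on.premise.event.service.key.
# Any UUID generator (uuidgen, an online tool, etc.) produces an equivalent value.
python3 -c 'import uuid; print(uuid.uuid4())'
```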
4. Configure the connection from the Events Service to the Controller:
ad.accountmanager.key.controller=controller-key
a. Verify that the minimum and maximum heap settings for the two Events Service processes (the Events Service JVM, and the
Elasticsearch processes, respectively) are correct and sufficient for your deployment.
The settings:
ad.jvm.options.name=events-service.vmoptions
ad.jvm.heap.min=1g
ad.jvm.heap.max=1g
ad.es.jvm.options.name=events-service.vmoptions
ad.es.jvm.heap.min=8g
ad.es.jvm.heap.max=8g
A production Elasticsearch installation requires 8 GB. For a demonstration installation, you can retain the default of 1 GB.
a. Set the JAVA_HOME environment variable so it specifies your Java installation directory. For example: JAVA_HOME=C:\Zulu\zulu-8-jre.
b. Change the directory to events-service, and then enter:
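The installation command itself is missing from this copy of the page. A sketch of what it might look like, assuming a service-install subcommand; the -p properties-file flag and the optional flags are taken from the notes below:

```
bin\events-service.exe service-install -p conf\events-service-api-store.properties --auto-start --log-verbose
```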
i. This command also installs Elasticsearch (even though it contains no explicit reference to Elasticsearch).
ii. The optional auto-start flag causes the Events Service to be installed as an automatically started service; without this flag,
the Events Service is installed as a manually started service.
iii. For verbose installation and operation logging (useful for troubleshooting), include the log-verbose flag.
8. Locate the service name for the Events Service:
bin\events-service.exe service-list
9. Open the Windows Services console, select the AppDynamics Events Service Api Store xxxxx service, and then select Start.
10. Check the health of the new node and verify service status:
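The health-check command was lost from this copy of the page; a hypothetical sketch, assuming a check-health subcommand with a host:port flag (the flag spelling may vary by version):

```
bin\events-service.exe check-health -hp <host>:9081
```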
a. If "Healthy" appears as the service status, then it indicates that the process is operating normally:
b. For the port, pass the administration port for the Events Service, 9081 by default.
c. If the service status does not display as Overall status Healthy, the service is unhealthy. To correct it, determine the correct key mappings between the following Events Service configuration settings and Controller settings:
i. appdynamics.es.eum.key should map to ad.accountmanager.key.eum
ii. appdynamics.saas.event.service.key should map to ad.accountmanager.key.controller
iii. appdynamics.saas.event.service.key should map to ad.accountmanager.key.mds
iv. If the values of the key mappings are blank in the Admin Console, then use a UUID generator to create them.
v. Use a UUID generator to create a value for ad.accountmanager.key.ops.
vi. Use a UUID generator to create another value for the following keys (you can use the same UUID for all three keys):
1. ad.accountmanager.key.slm
2. ad.accountmanager.key.jf
3. ad.accountmanager.key.service
11. Configure the connections from the Analytics Agent, EUM Server, or Database Monitoring agents to the Events Service, as described in Connect
to the Events Service.
Note that all services on Windows machines must be installed on the Enterprise Console host since the Enterprise Console does not support
remote operations on Windows. Therefore, you cannot use the Enterprise Console GUI to deploy an Events Service cluster.
Before starting, review the topology notes in Events Service Deployment and make sure that all machines in the cluster meet the system requirements.
1. Follow the steps for configuring a single node in the single-node installation above. Additionally, configure the following settings in the conf\events-service-api-store.properties file:
a. Set the minimum number of master-eligible nodes:
ad.es.node.minimum_master_nodes=2
The setting specifies the minimum number of master-eligible instances that must be available in order to elect a new master. Since an
Events Service cluster has three master nodes, this value should be two for a cluster.
b. Set the value of ad.es.event.index.shards to the number of nodes, in this case, three:
ad.es.event.index.shards=3
You do not need to change this value if it is already higher than the number of nodes.
c. Set the replication factor to 1 by changing the ad.es.event.index.replicas and ad.es.metadata.replicas properties, as
follows:
ad.es.event.index.replicas=1
ad.es.event.index.hotLifespanDays=10
ad.es.metadata.replicas=1
d. For the unicast hosts property, add the hostname or IP address, along with the port 9300, for each node in the cluster:
ad.es.node.unicast.hosts=node1.example.com:9300,node2.example.com:9300,node3.example.com:9300
e. Change the publish host to the IP address or hostname of this machine. For example:
ad.es.node.network.publish.host=node2.example.com
f. Configure heap space for the Events Service and ElasticSearch processes, as follows:
i. To set the Events Service process heap size to 1 GB, for example, use the following properties:
ad.jvm.options.name=events-service.vmoptions
ad.jvm.heap.min=1g
ad.jvm.heap.max=1g
For the setting value, g indicates gigabyte (GB), and m indicates megabyte (MB).
ii. For the ElasticSearch process, the heap size should be set to half the size of the available RAM on the system, up to a
maximum of 31 GB. To set the ElasticSearch process heap size to 8 GB, for example, set the properties as follows:
ad.es.jvm.options.name=events-service.vmoptions
ad.es.jvm.heap.min=8g
ad.es.jvm.heap.max=8g
For the setting value, g indicates gigabyte (GB), and m indicates megabyte (MB).
g. Save and close the file.
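Taken together, the cluster-specific portion of conf\events-service-api-store.properties for one node of the three-node example above might look like this (hostnames and heap sizes are the example values from the steps, not requirements):

```
ad.es.node.minimum_master_nodes=2
ad.es.event.index.shards=3
ad.es.event.index.replicas=1
ad.es.event.index.hotLifespanDays=10
ad.es.metadata.replicas=1
ad.es.node.unicast.hosts=node1.example.com:9300,node2.example.com:9300,node3.example.com:9300
ad.es.node.network.publish.host=node2.example.com
ad.jvm.options.name=events-service.vmoptions
ad.jvm.heap.min=1g
ad.jvm.heap.max=1g
ad.es.jvm.options.name=events-service.vmoptions
ad.es.jvm.heap.min=8g
ad.es.jvm.heap.max=8g
```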
2. Install the Events Service as a Windows service:
The optional auto-start flag causes the Events Service to be installed as an automatically started service. If you do not include the flag, the Events
Service is installed as a manually started service. An additional option, log-verbose, increases the verbosity of installation and operation
logging, which is useful for troubleshooting.
3. Enter the following command to find the service name for the Events Service:
bin\events-service.exe service-list
4. Pass the service name returned by the service-list command as the -s parameter argument in the following command:
For the port, pass the administration port for the Events Service, 9081 by default. Verify that "Healthy" appears as the service status, indicating
that the process is operating normally:
6. Configure a load balancer to distribute traffic to the Events Service cluster, as described in Load Balance Events Service Traffic.
7. Connect the Controller and other clients—Analytics Agent, EUM Server, or Database Monitoring agents—to the Events Service, as described in C
onnect to the Events Service.
Note that all services on Windows machines must be installed on the Enterprise Console host since the Enterprise Console does not support
remote operations on Windows. Therefore, you cannot use the Enterprise Console GUI to expand an Events Service cluster.
Before starting, prepare the new cluster machine. Verify system requirements and prepare the environment as described above.
For each node beyond the original three master nodes, download and configure the node as previously described. The configuration steps for nodes added to the cluster after the initial three master nodes are as follows:
1. Open the conf\events-service-api-store.properties file for editing and make these configuration changes:
a. Disable master eligibility for the node and keep the minimum master-eligible node count at 2:
ad.es.node.master=false
ad.es.node.minimum_master_nodes=2
c. Set the value of ad.es.event.index.shards to the number of nodes in the cluster. You do not need to change this value if it is already higher than the number of nodes.
ad.es.event.index.shards=<number_of_nodes>
d. For the unicast hosts property, add the hostnames or IP addresses of all nodes in the cluster, including the node you are adding. For
each node specify the ports on which the nodes communicate, 9300-9400. For example:
ad.es.node.unicast.hosts=node1.example.com[9300-9400],node2.example.com[9300-9400],node3.example.
com[9300-9400],node4.example.com[9300-9400]
You do not need to reconfigure the unicast hosts settings for existing cluster members, as the new node can join the cluster dynamically.
e. Change the publish host to the IP address or hostname of this machine. For example:
ad.es.node.network.publish.host=node4.example.com
f. Configure heap space for the Events Service and ElasticSearch processes, as follows:
i. To set the Events Service process heap size to 1 GB, for example, use the following properties:
ad.jvm.options.name=events-service.vmoptions
ad.jvm.heap.min=1g
ad.jvm.heap.max=1g
For the setting value, g indicates gigabyte (GB), and m indicates megabyte (MB).
ii. For the ElasticSearch process, the heap size should be set to half the size of the available RAM on the system, up to a
maximum of 31 GB. To set the ElasticSearch process heap size to 8 GB, set the properties as follows:
ad.es.jvm.options.name=events-service.vmoptions
ad.es.jvm.heap.min=8g
ad.es.jvm.heap.max=8g
For the setting value, g indicates gigabyte (GB), and m indicates megabyte (MB).
g. Save and close the file.
2. Install the Events Service as a Windows service:
The optional auto-start flag causes the Events Service to be installed as an automatically started service. If you do not include the flag, the Events
Service is installed as a manually started service. An additional option, log-verbose, increases the verbosity of installation and operation
logging, which is useful for troubleshooting.
3. Enter the following command to find the service name for the Events Service:
bin\events-service.exe service-list
4. Pass the service name returned by the service-list command as the -s parameter argument in the following command:
Note: At least two nodes must be running before you run the command.
For the port, pass the administration port for the Events Service, 9081 by default. Verify that "Healthy" appears as the service status, indicating
that the process is operating normally:
6. Modify your load balancer rules to include the new cluster node. For more information, see Load Balance Events Service Traffic.
bin\events-service.exe stop
1. Enter the following command to find the service name for the Events Service:
bin\events-service.exe service-list
2. Pass the service name returned by the service-list command as the -s parameter argument to the following command. Enclose the service
name in double-quotes.
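The stop command itself is missing from this copy of the page. As a sketch only, assuming a service-stop subcommand that accepts the service name via -s:

```
bin\events-service.exe service-stop -s "<service_name>"
```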
Remove a Node
To remove a node that is not enabled for operation as a master node, simply stop the Events Service on the node or remove the machine it runs on from the network. Note these restrictions:
You cannot remove nodes such that the resulting cluster size is two.
A cluster of three or more nodes cannot be reduced to a single-node Events Service.
After you remove a node, be sure to adjust your load balancer rules to remove the old cluster member. See Load Balance Events Service Traffic for more
information.
If you are not using a load balancer with a cluster deployment, keep in mind that the connection settings for the first master node that reports to the Controller at installation time are written to the Controller setting that identifies the Events Service to the Controller. If you remove a master node in that case, check whether the removed master node is the node identified as the Events Service destination URL in the Controller connection settings, and adjust the setting if so. See Connect to the Events Service for more information.
To reconfigure an existing node to enable operation as a master node, or add a new node with the master option enabled:
ad.es.node.master=true
This component is available for on-premises deployments only. SaaS deployments are managed by AppDynamics.
This page describes how to manage Events Service with the Enterprise Console. All of these tasks can be performed on the GUI or CLI.
On Linux
Start the Events Service on Linux by running this command:
On Windows
Start the Events Service on Windows by running this command:
You can check the status of the cluster from the Controller page in the Enterprise Console GUI or the Controller machine using this command:
bin/platform-admin.sh show-events-service-health
The output shows possible issues and the steps you need to take to resolve them. For example, if the available disk is low, the resolution is to add nodes
to the cluster.
Cluster out of capacity: The heap utilization of an Events Service Java process exceeds 80%. Resolution: Add Events Service nodes.
Disk size remaining drops below 30%: The available disk space of the identified node dropped below 30%. Resolution: Add Events Service nodes.
Events Service is not reachable but the host is reachable: The Events Service process on the identified node is not functioning properly. Resolution: Restart the node.
Cluster needs restart: A condition has been identified that requires a cluster restart. Resolution: Restart the cluster.
Cluster size is 2: An Events Service cluster requires more than two nodes. Resolution: Add a node.
Before starting, prepare the new cluster machine. Verify system requirements and prepare the environment as described in Events Service Requirements.
It is important for any new machine in the cluster to have the same SSH-enabled user account as existing cluster members.
Once you have prepared the system, run the command for adding nodes:
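The command itself is missing from this copy of the page. As a hedged sketch, assuming an add-events-service-nodes subcommand that accepts the host file described below:

```
bin/platform-admin.sh add-events-service-nodes --host-file hosts.txt
```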
The file you pass to the command (hosts.txt in the example) should contain the internal DNS hostnames or IP addresses of the nodes to add. It does
not need to list existing nodes in the cluster. These hosts should be part of the platform. For more information about how to add a host to the platform, see
Administer the Enterprise Console.
Be sure to modify your load balancer rules to include the new cluster member in its routing rules. See Load Balance Events Service Traffic for more
information.
Once you have prepared the system, run the command for restarting your node:
Once you have prepared the system, run the command for restarting your cluster nodes:
The remove-events-service-node command removes the Events Service software and data from a single node that you specify by hostname. You
should only use this command if you have at least four nodes in your cluster. Removing an Events Service node from a three-node cluster is not
supported. Identify the node to remove using the --node command line parameter.
After you remove a node, be sure to adjust your load balancer rules to remove the old cluster member. See Load Balance Events Service Traffic for more
information.
If you are not using a load balancer with a cluster deployment, keep in mind that the connection settings for the first primary node that reports to the
Controller at installation time are written to the Controller setting that identifies the Events Service to the Controller. If you remove a primary node in that
case, check whether the removed primary node is node identified as the Events Service destination URL in the Controller connection settings (e.g., appdyn
amics.on.premise.event.service.url) and adjust the setting if so. See Connect to the Events Service for more information.
You cannot remove nodes such that the resulting cluster size is two.
A cluster of three or more nodes cannot be reduced to a single-node Events Service.
If you attempt to remove a primary node using the command shown above, the Enterprise Console notifies you that you are attempting to remove a primary node and cancels the operation. As indicated in the output, you can proceed to remove the primary node by rerunning the command with the -f (force) flag. When you remove a primary node, the cluster elects a new primary node from the existing data nodes. The election process may take a few seconds, during which new events cannot be processed. Be sure to perform this operation at a time when the impact of a brief interruption of service will be minimal.
If you have an unreachable node you would like to remove, but cannot due to the above restrictions, you can choose to replace it instead.
This command replaces the old node specified in the argument with the new node:
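The command itself was lost from this copy of the page. As a sketch only, assuming a replace-events-service-node subcommand; the flag names are assumptions:

```
bin/platform-admin.sh replace-events-service-node --oldNode <old_host> --newNode <new_host>
```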
After removing Events Service nodes from the cluster, you may observe that the value of appdynamics.es.eum.key changed in the Controller admin.jsp and in the Events Service properties file, but not in the EUM properties file (analytics.accountAccessKey).
Check whether the key value changed in the Controller and the Events Service, and if so, replace the key value with the value of analytics.accountAccessKey from the EUM properties file.
2. After successfully logging in, submit an enable-ssl job for the Events Service, providing the path to the KeyStore file, the KeyStore password,
and KeyStore alias.
3.
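The cURL command for step 3 is missing from this copy of the page. A hypothetical form (the hostname is a placeholder, and -k skips certificate verification, which you may need for a self-signed certificate):

```
curl -vk https://<events_service_host>:9080/
```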
4. The output of the cURL command should show the TLS handshakes and the HTTP status 200:
...
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
...
< HTTP/1.1 200 OK
< Date: Fri, 10 May 2019 00:13:49 GMT
< X-Content-Security-Policy: default-src 'self'
...
bin/platform-admin.sh retrieve-events-service-logs
When the command is finished, a ZIP file named events-service.log.zip is created in the location from which you ran the script. You can then
extract the archive to troubleshoot or submit the archive for troubleshooting assistance to your AppDynamics representative. If the Enterprise Console
failed to connect to one of the cluster nodes to retrieve logs for any reason, the connection error is written to a log file included in the archive.
You can do so by creating the PEM file, as described in the discussion of configuring SSH passwordless login on Prepare the Events Service Host, and
using the following command to install the new PEM file.
The general steps for upgrading an Events Service deployment are as follows:
1. Upgrade the Controller. (See Upgrade the Controller Using the Enterprise Console.)
2. Apply the upgrade to the Events Service nodes using the following command:
bin/platform-admin.sh upgrade-events-service
The Enterprise Console checks whether the Events Service is up to date relative to the current Controller version and, if not, performs the update.
Machine Agent
2. Install both agents on each node in the cluster: first the Java Agent, then the Machine Agent.
3. On each node in the cluster, update the VM options for the Java Agent:
a. Open the following file in a text editor:
<controller_home>/platform_admin/events-service/conf/events-service.vmoptions
b. Add the following lines to the end of the file:
-javaagent:/opt/appdynamics/events-service/java_agent/ver4.5.0.0/javaagent.jar
-Dappdynamics.agent.accountName=<account_name>
-Dappdynamics.agent.applicationName=<events_service_app_name>
-Dappdynamics.controller.hostName=<controller_host>
-Dappdynamics.controller.port=443
-Dappdynamics.controller.ssl.enabled=true
-Dappdynamics.agent.nodeName=<events_service_node_name>
-Dappdynamics.agent.tierName=<events_service_tier_name>
Adjust the path to the Java Agent JAR, account name, and other values as appropriate.
The business application name (events_service_app_name) and tier name (events_service_tier_name) should normally be the same for all nodes in an Events Service cluster, while each node must have a unique name (events_service_node_name).
The Node Name and Tier Name for each Machine Agent should be the same as the events_service_node_name and events_service_tier_name that you specify on each node.
5. Restart the Events Service on all nodes in the cluster: on the Controller host, navigate to <controller_home>/platform_admin/events-service/ and enter the following command: bin/platform-admin.sh restart-events-service
6. In the Controller UI, go to the Applications table and open the dashboard for the events_service_app_name application. (You might need to wait a
few minutes for this application to appear as the Events Services on the nodes restart and begin sending data to the Controller.)
7. In the Application Dashboard, choose Configure > Instrumentation.
8. Select the events_service_tier_name tier and choose Use Custom Configuration for this Tier.
9. Under Custom Match Rules, create a new rule with the following attributes:
After the Events Service settings update, the Events Service will require a restart initiated from the Platform Admin CLI or Enterprise Console
GUI.
2. Make sure that the configuration is effective by running the following command:
This page takes you through the sample configuration for a load balancer for the Events Service. It introduces you to the concepts and requirements
around load balancing Events Service traffic.
To configure the load balancer, add the Events Service cluster members to a server pool to which the load balancer distributes traffic on a round-robin
basis. Configure a routing rule for the primary port (9080 by default) of each Events Service node. Every member of the Events Service cluster, primary
node or not, needs to be included in the routing rule. Keep in mind that increasing the size of the cluster will involve changes to the load balancer rules
described here.
The following figure shows a sample deployment scenario. The load balancer forwards traffic for the Controller and any Events Service clients, Analytics
Agents in this example.
No two environments are exactly alike, so be sure to adapt the steps for your load balancer type, operating systems, and other site-specific requirements.
1.
2. Add the following configuration to a new file under the Nginx configuration directory, for example, /etc/nginx/conf.d/eventservice.conf:
upstream events-service-api {
server 192.3.12.12:9080;
server 192.3.12.13:9080;
server 192.3.12.14:9080;
server 192.3.12.15:9080;
keepalive 15;
}
server {
listen 9080;
location / {
proxy_pass http://events-service-api;
proxy_http_version 1.1;
proxy_set_header Connection "Keep-Alive";
proxy_set_header Proxy-Connection "Keep-Alive";
}
}
In the example, there's a single upstream context for the API-Store ports on the cluster members. By default, Nginx distributes traffic to the hosts
on a round-robin basis.
3. Check the following operating system settings on the machine:
Permit incoming connections in the firewall built into the operating system, or disable the firewall if it is safe to do so. On CentOS 6.6,
use the following command to insert the required configuration in iptables:
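The iptables command was lost from this copy of the page. One possible rule for CentOS 6.x that opens the Events Service listen port (adjust the port, interface, and persistence mechanism to your site):

```
sudo iptables -I INPUT -p tcp --dport 9080 -j ACCEPT
sudo service iptables save
```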
If necessary, disable SELinux security enforcement by editing /etc/selinux/config and setting SELINUX=disabled. Restart the
computer for this setting to take effect.
4. Start Nginx:
sudo nginx
Nginx starts and directs traffic to the upstream servers. If you get errors regarding unknown directives, make sure you have the latest version
of Nginx.
The following instructions describe how to set up SSL termination at the load balancer. These steps use HAProxy as the example load balancer. An overview of the steps:
The following diagram shows a sample deployment reflected in the configuration steps:
Before Starting
To perform these steps, you need:
For production use, AppDynamics strongly recommends the use of a certificate signed by a third-party CA or your own internal CA rather than a self-signed certificate.
2. Create the certificate by running the following command, replacing <number_of_days> with the number of days for which you want the
certificate to be valid, such as 365 for a full year:
sudo openssl req -x509 -nodes -days <number_of_days> -newkey rsa:2048 -keyout ./events_service.key -out ./events_service.crt
3. Respond to the prompts to create the certificate. For the Common Name, enter the hostname for the load balancer machine as identified by
external DNS (that is, the hostname that agents will use to connect to the Events Service). This is the domain that will be associated with the
certificate.
4. Put the certificate artifacts in a PEM file, as follows:
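The PEM-assembly command was lost from this copy of the page. HAProxy expects the certificate and private key concatenated into one PEM file; the sketch below regenerates a throwaway key and certificate (as in step 2, with a placeholder subject) so the example is self-contained, then concatenates them:

```shell
# Recreate the key and certificate from step 2 (placeholder CN shown),
# then combine them into a single PEM file for the load balancer.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout events_service.key -out events_service.crt \
  -subj "/CN=lb.example.com"
cat events_service.crt events_service.key > events_service.pem
```

In practice you would run only the cat command against the key and certificate you created in step 2.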
2. Generate a certificate signing request (CSR) based on the private key. For example:
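The example command was lost from this copy of the page. A self-contained sketch that creates a private key and then a CSR from it; the subject is a placeholder, and in practice the CN should be the external DNS hostname of your load balancer:

```shell
# Create a 2048-bit RSA private key, then generate a CSR from it.
openssl genrsa -out events_service.key 2048
openssl req -new -key events_service.key -out events_service.csr \
  -subj "/CN=lb.example.com"
```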
3. Submit the events_service.csr file to a third-party CA or your own internal CA for signing. When you receive the signed certificate, install it
and the CA authority root certificate.
4. Depending on the format of the certificates returned to you by the Certificate Authority, you may need to put the certificate and key in PEM format,
for example:
In the command, replace <ca_crt> with the certificate returned to you by the Certificate Authority. Include any intermediate CA certs, if present,
when creating the PEM file.
backend events_service_backend
mode tcp
balance roundrobin
server node1 192.3.12.12:9080 check
server node2 192.3.12.13:9080 check
server node3 192.3.12.14:9080 check
1. Transfer a copy of the signed certificate, events_service.crt, to the home directory (denoted as $HOME in the instructions below) of the
machine running the agent using Secure Copy (scp) or the file transfer method you prefer.
2. Copy the certificate file to the directory location of the trust store used by the agent:
cp $HOME/events_service.crt $JAVA_HOME/jre/lib/security/
3. Navigate to the directory and make a backup of the existing cacerts.jks file:
cd $JAVA_HOME/jre/lib/security/
cp cacerts.jks cacerts.jks.old
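The import command (step 4) and the verification command (step 5) are missing from this copy of the page. Assuming the standard JDK keytool, they might look like the following sketch; the alias name events-service is arbitrary:

```
keytool -import -alias events-service -file $HOME/events_service.crt -keystore cacerts.jks
keytool -list -keystore cacerts.jks -alias events-service
```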
When prompted, enter the password for the truststore (default is changeit) and enter yes when asked whether to trust this certificate.
5. Verify that the certificate is in the truststore:
6. Navigate to the installation folder of the Analytics Agent and edit conf/analytics-agent.properties to change the value of the HTTP
endpoint property:
http.event.endpoint=https://<External_DNS_hostname_Load_Balancer>:9080
If the agent is operating normally, the healthy field is set to true, as in the following example:
1. Transfer a copy of the signed certificate, events_service.crt, to the home directory (denoted as $HOME in the instructions below) of the
machine running the Controller using Secure Copy (scp) or the file transfer method you prefer.
2. Navigate to the directory containing the Controller trust-store (as determined by the Controller startup parameter -Djavax.net.ssl.trustStore).
3. Make a backup of the existing cacerts.jks file:
cp cacerts.jks cacerts.jks.old
When prompted, enter the password for the truststore (default is changeit) and enter yes when asked whether to trust this certificate.
5. Verify that the certificate is in the truststore:
You can now verify that the Analytics UI is accessible and showing data.
This component is available for on-premises Controllers only. SaaS Controllers are managed by AppDynamics.
You must be connected to the AppDynamics Events Service to send data to it. AppDynamics uses API keys and connection URLs to establish connections
between components.
You can configure these connection settings on the Controller Settings page of the Admin Console (see Access the Administration Console). The
following sections describe the required connection settings for Database Visibility, Analytics, and End User Monitoring.
To connect these modules to the Events Service, open the Admin Console > Controller Settings and set values for the following properties:
Set appdynamics.on.premise.event.service.key to the corresponding key in the properties file for Database Visibility.
Set appdynamics.on.premise.event.service.url to the Events Service endpoint URL.
Set appdynamics.non.eum.events.use.on.premise.event.service to true.
Sample
analytics.enabled=true
analytics.serverScheme=http
analytics.serverHost=events.service.hostname
analytics.port=9080
analytics.accountAccessKey=1a59d1ac-4c35-4df1-9c5d-5fc191003441
The value of appdynamics.es.eum.key will automatically be set to the property analytics.accountAccessKey of the file <EUM_HOME>/eum-processor/bin/eum.properties.
appdynamics.on.premise.event.service.proxy.host
appdynamics.on.premise.event.service.proxy.port
appdynamics.on.premise.event.service.proxy.user
appdynamics.on.premise.event.service.proxy.password.file
appdynamics.on.premise.event.service.proxy.use.ssl
If you are connecting EUM through a proxy, you must set values for the following properties:
appdynamics.controller.http.proxyHost
appdynamics.controller.http.proxyPort
appdynamics.controller.http.proxyUser
appdynamics.controller.http.proxyPasswordFile
Backing up Events Service data helps you recover from hardware or other failure of an Events Service machine. A snapshot represents the
backed-up data for the entire Events Service cluster. In addition to using it for failure recovery, you can use a snapshot to migrate the Events Service to a
new instance.
The Events Service tool—events-service.sh for Linux and events-service.exe for Windows—includes commands for preparing the system for
backing up with snapshots, generating a snapshot, and restoring from a snapshot, as described below.
The following instructions show sample commands for Linux. If using Windows, be sure to use the events-service.exe form of the executable rather
than the .sh form, and adjust the sample directory paths as needed.
Only the first snapshot results in a full copy of the data. Each subsequent snapshot is incremental, applying only the changes since the last snapshot.
Backing up frequently, therefore, does not result in substantially more storage overhead than backing up infrequently.
The Events Service includes tools for setting up the snapshot repository. It supports the following snapshot repository location types:
A file system location that is shared among the Events Service nodes.
An Amazon S3 bucket
After choosing and preparing the system that will host the snapshot, set up each Events Service node as follows:
1. If using a shared file system, mount it at the default location for the backup repository, <appd_home>/events-service/appdynamics-
events-service-backup. To change this backup location, set the ad.es.backupmanager.path.repo setting in conf/events-
service-api-store.properties. Keep in mind, however, that changing the properties file requires restarting the Events Service node for
the change to take effect.
2. Set up the repository on each node. Use the appropriate command for your repository type:
For shared file system:
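The commands are sketched below; the file-system subcommand name and the -p properties-file flag are assumptions (snapshot-configure-s3 is named in the text that follows), so run bin/events-service.sh -h for the authoritative syntax:

```shell
# Shared file system repository (subcommand name is an assumption):
bin/events-service.sh snapshot-configure -p conf/events-service-api-store.properties

# Amazon S3 repository (access-key and secret-key arguments are optional):
bin/events-service.sh snapshot-configure-s3 -p conf/events-service-api-store.properties
```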
The snapshot-configure-s3 command accepts additional optional arguments, including arguments for passing the access key and
secret key for S3. Run "bin/events-service.sh -h" to view all options.
Look for a message similar to the following to verify that the configuration succeeded:
Create a Snapshot
After setting up the repository, you can generate a snapshot of the Events Service data. If you are backing up a cluster, you only need to run the command
from one of the primary nodes. If you have more than one cluster, generate a snapshot for each one. Snapshots are cluster-specific.
Generate a snapshot:
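A sketch of the command; the subcommand name and -p flag are assumptions, so check bin/events-service.sh -h for the exact syntax:

```shell
# Create a snapshot of the cluster's data (incremental after the first run).
bin/events-service.sh snapshot -p conf/events-service-api-store.properties
```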
You can use this command to script regular backups based on your backup policy. The following output indicates that backup was successful:
If you don't specify a snapshot ID, the command gets the status for the most recent snapshot. You can use the snapshot-list command to see a list of
available snapshots.
To restore a snapshot, use the snapshot-restore command, passing the properties file for the Events Service instance you are backing up. The
following shows an example with sample output:
Check the status of the snapshot restore using the snapshot-restore-status command. For example:
You can restore a specific snapshot by passing the snapshot ID with the command. Otherwise, the most recent snapshot is restored.
You can use the snapshot-list command to get a list of snapshot IDs.
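Putting the restore-related subcommands named above together, a typical sequence looks like this; the -p flag and the snapshot ID placeholder are assumptions:

```shell
# List available snapshots and their IDs:
bin/events-service.sh snapshot-list -p conf/events-service-api-store.properties

# Restore a specific snapshot (omit the ID to restore the most recent):
bin/events-service.sh snapshot-restore -p conf/events-service-api-store.properties <snapshot-id>

# Check the progress of the restore:
bin/events-service.sh snapshot-restore-status -p conf/events-service-api-store.properties
```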
The target Events Service needs to be a fresh installation; data from two different Events Service instances cannot be merged. Do not point the
Events Service URL in the Controller configuration at the new instance location until you have completed these steps.
You can upgrade the Events Service either manually or by using the Enterprise Console.
For on-premises deployments, 4.5.2 is the latest version of the Events Service. If you upgrade to a version of the Events Service other than the latest, run
the Enterprise Console installer for the desired Events Service version.
AppDynamics removed Search Guard from the on-premises Events Service version 4.5.2.20561. If your deployment requires Search Guard or
a comparable feature, do not upgrade to this version of the Events Service.
AppDynamics will provide an alternative security feature with the next on-premises Events Service release.
To upgrade the Events Service software from a version earlier than 4.1, you must first manually upgrade the service to 4.1, and then use the
Enterprise Console to discover the Events Service nodes.
1. The Events Service process should have restarted—verify its health status in the Enterprise Console GUI.
2. Merge your backup copy of the events-service.vmoptions file into the new copy of the file created by the upgrade.
After an upgrade, you will find the Events Service startup script paths below:
<installDir>/appdynamics/events-service/processor/bin/events-service.sh
This page describes how to upgrade a scaled-out Events Service on primarily Linux machines using the Enterprise Console.
You do not need to stop the Events Service before upgrading because the Enterprise Console does this for you.
After you upgrade the Events Service, upgrade the EUM server if it is part of your deployment. Then, upgrade the Controller.
Upgrade the Events Service from 4.1.x, 4.2.x, and 4.3.x to 4.4.x or Latest
The Enterprise Console supports the installation of the Events Service on a Windows environment for a single node install. If you have several
remote nodes, you will need to do a manual upgrade of the Events Service cluster and set the keys manually. See Connect to the Events
Service
The Enterprise Console manages the Controller and Events Service together, so their keys found in events-service-api-store.properties
are set and synced to each other upon upgrade. However, we recommend confirming that the keys were synced correctly:
appdynamics.on.premise.event.service.key == ad.accountmanager.key.controller
To upgrade the Events Service from 4.1.x, 4.2.x, and 4.3.x to 4.4.x or the latest version, you can use the Discover and Upgrade feature:
1. Check that you have fulfilled the Enterprise Console prerequisites before starting.
2. Open a browser and navigate to the GUI:
http(s)://<hostname>:<port>
The Installation Path is an absolute path under which all of the platform components are installed. The same path is used for
all hosts added to the platform. Use a path that does not have any existing AppDynamics components installed under it.
The path you choose must be writable; that is, the user who installed the Enterprise Console must have write permissions to
that folder. The same path must also be writable on all of the hosts that the Enterprise Console manages.
5. Add a Host:
Note that all services on Windows machines must be installed on the Enterprise Console host since the Enterprise Console does not
support remote operations on Windows. Therefore, you cannot add a host in a Windows Enterprise Console machine.
a. Enter the host machine-related information: Host Name, Username, and Private Key. This is where the Events Service will be upgraded.
Therefore, this needs to point to the host machine where the Events Service is currently up and running. For more information about how
to add credentials and hosts, see Administer the Enterprise Console.
6. Click Platforms. Select the newly created platform and navigate to the Events Service page.
7. Discover Events Service:
a. Select Discover & Upgrade Events Service.
b. Select an available Target Version from the dropdown.
The list is populated by versions that the Enterprise Console is aware of. This means that you can upgrade the Events
Service to any intermediate version or to the latest version as long as the Enterprise Console installer has been run for those
versions.
The Enterprise Console onboards the Events Service on the selected host machine. When the Enterprise Console discovers a
component, it also checks whether an upgrade is available and performs the upgrade. Plan for Events Service downtime during this
period. You can view the status of the upgrade job on the Jobs page.
Once the upgrade is complete, the Events Service Health status and related information can be accessed from the Events Service page.
After upgrading to 4.4.x or the latest, the commands to start and stop the Events Service change. See Administer the Events Service for more
information.
This procedure should be used to upgrade the Events Service when an existing platform in the Enterprise Console is already managing Events
Service cluster nodes.
To upgrade the Events Service from 4.4.x to the latest version, you can use the Upgrade Events Service feature:
1. Check that you have fulfilled the Enterprise Console prerequisites before starting.
2. Upgrade the Enterprise Console to the latest version.
3. Open a browser and navigate to the GUI:
http(s)://<hostname>:<port>
The list is populated by versions that the Enterprise Console is aware of. This means that you can upgrade the Events Service to any
intermediate version or to the latest version as long as the Enterprise Console installer has been run for those versions.
You do not need to stop the Events Service before upgrading because the Enterprise Console does this for you.
After you upgrade the Events Service, upgrade the EUM server if it is part of your deployment. Then, upgrade the Controller.
Upgrade the Events Service from 4.1.x, 4.2.x, and 4.3.x to 4.4.x or Latest
To upgrade the Events Service software from 4.1.x, 4.2.x, or 4.3.x to 4.4.x or the latest version, first download and run the Enterprise
Console installer, and then perform the following steps:
The installation directory is the directory where the Enterprise Console installs all platform components.
To avoid any failures, do not use the 4.3 or earlier Platform Admin installation directory. Instead, provide a new/empty directory.
2. Add the SSH key that the Enterprise Console will use to access and manage the Events Service hosts remotely. (See Create the SSH Key for
more information):
<file path to the key file> is the private key for the Enterprise Console machine. The installation process deploys the keys to the
Events Service hosts.
3. Add hosts to the platform, passing the credential you added to the platform:
4. Discover the Events Service nodes that are not yet integrated:
This command integrates the nodes into the Enterprise Console and also upgrades them. If your upgrade fails, you can resume by passing the
flag useCheckpoint=true as an argument after --args.
After upgrading to 4.4.x or the latest, the commands to start and stop the Events Service change. See Administer the Events Service for more
information.
4. Apply the upgrade to the Events Service nodes with the following command:
If your upgrade fails, you can resume by passing the flag useCheckpoint=true as an argument after --args.
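The CLI sequence for these steps can be sketched as follows; every subcommand and flag name shown here is an assumption that varies by Enterprise Console version, so verify with bin/platform-admin.sh -h before use:

```shell
# Add the SSH credential the Enterprise Console uses to reach the hosts:
bin/platform-admin.sh add-credential --credential-name es-ssh --type ssh \
    --user-name appduser --ssh-key-file <file path to the key file>

# Add the Events Service hosts to the platform:
bin/platform-admin.sh add-hosts --hosts host1 host2 host3 --credential es-ssh

# Discover and upgrade the Events Service nodes; resume a failed run with useCheckpoint:
bin/platform-admin.sh submit-job --service events-service --job upgrade \
    --args useCheckpoint=true
```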
This page describes how to manually upgrade an Events Service. This is useful when you did not use the Platform Administration Application or the
Enterprise Console to deploy the Events Service. The primary case for this is when you need to upgrade Events Service nodes hosted on
remote Windows machines.
Prerequisite
1. Download the Events Service distribution, events-service.zip, from the AppDynamics download site to the Events Service machine.
2. Stop the Events Service processes:
bin/events-service.sh stop
If you are upgrading from 4.2 to 4.3.x or a later version, you must edit the events-service-api-store.properties file, replacing
the port ranges [9300-9400] with :9300. For example, change:
ad.es.node.unicast.hosts=node1.example.com[9300-9400],node2.example.com[9300-9400],node3.example.com[9300-9400]
to:
ad.es.node.unicast.hosts=node1.example.com:9300,node2.example.com:9300,node3.example.com:9300
6. Configure the connection to the Events Service in the Controller, and get the Controller key for the Events Service configuration as follows:
a. With the Controller running, open the Administration Console as the root user.
b. In the Controller Settings page, search for appdynamics.on.premise.event.service.url.
c. Replace the default value with the internal hostname of the Events Service machine and the default Events Service listen port, 9080.
d. Search for an additional setting, the one you need to enable the connection from the Events Service to the Controller:
appdynamics.on.premise.event.service.key.
e. Copy the value of the property to your clipboard. You will need to configure this in the Events Service properties file next.
7. Configure the connection from the Events Service to the Controller. From the Events Service home directory, open
conf/events-service-api-store.properties and set the key you copied from the Controller:
cd events-service
ad.accountmanager.key.controller=controller-key
8. Ensure that the following critical properties are configured appropriately in the events-service-api-store.properties file:
ad.accountmanager.keyNamesCSV=EUM,CONTROLLER,MDS,OPS,SLM,JF
ad.accountmanager.key.eum=
ad.accountmanager.key.controller=
ad.accountmanager.key.mds=
ad.accountmanager.key.ops=
ad.accountmanager.key.slm=
ad.accountmanager.key.jf=
ad.accountmanager.key.service=
ad.jvm.heap.min=1g
ad.jvm.heap.max=1g
ad.es.jvm.heap.min=1g
ad.es.jvm.heap.max=1g
9. Move the ad.es.node.unicast.hosts property in events-service-api-store.properties when upgrading from 4.2 to 4.3.x or later.
10. Save and close the events-service-api-store.properties file.
11. Verify that the new Events Service home directory exists. The Events Service home directory is determined by the ad.es.path.home property in
the properties file used to start up the Events Service.
If the directory does not exist, create it. For example, create the following directory: /opt/appdynamics/events-service/appdynamics-
events-service
12. Move (do not copy) the old Events Service data directory to the new Events Service home directory. For example:
mv /opt/appdynamics/events-service-backup/appdynamics-events-service/data /opt/appdynamics/events-service
/appdynamics-events-service/
13. Restart the Events Service processes from the new directory:
14. Verify that "Healthy" appears as the service status, indicating that the process is operating normally:
15. Configure the connections from the Analytics Agent, EUM Server, or Database Monitoring agents to the Events Service, as described in Connect
to the Events Service.
For information on performing these steps, see Install the Events Service on Windows.
If you upgrade to 4.4.x from 4.3.x or an earlier version, the commands to start and stop the Events Service change. See Administer the Events
Service for more information.
To use this procedure appropriately for your deployment, follow the guidelines below.
On-premises 4.5.2 and older: No action is required, but AppDynamics strongly recommends that you update data field names to prepare for future
Events Service releases.
Update Overview
Version 4.5.3 of the Events Service runs Elasticsearch 5.6, whereas previous Events Services run earlier versions of Elasticsearch. To upgrade to Events
Service 4.5.3, you may need to rename some fields used in collecting transaction analytics, log analytics, or other types of data.
Review the requirements below and follow instructions where applicable. To understand the rationale for the changes, see More About Field Names.
Requirements
Required action: Give alphanumeric names to any fields whose names are empty.
Strongly recommended action: Replace dots in field names with hyphens or underscores.
Elasticsearch stores data in JSON documents whose structure is hierarchical. Dotted field names can be used to query into those JSON documents: the
field name is treated as a path whose components are separated by dots. See
https://www.elastic.co/guide/en/elasticsearch/reference/2.4/dots-in-names.html.
It can be impossible to know whether a dotted field name is intended as a path to a JSON element, or just a plain field name. To treat this problem in a
consistent way, Elasticsearch, beginning with version 5.6, always automatically expands dotted field names into hierarchical JSON structures. Each dot
creates another level nested lower in the hierarchy.
Events Service 4.5.3 attempts to gracefully handle dots in field names and allow the default behavior of Elasticsearch. However, in a few corner cases,
Elasticsearch still fails to index the events in which such field names occur. In these situations, the only recourse is to change the field names.
"a.very.long.field.name.truncated.with.dots..." ->
"a": {
"very": {
"long": {
"field": {
"name": {
"truncated": {
"with": {
"dots": {
"": {
"": {
"": {
}
}
}
}
}
}
}
}
}
}
}
".a.b.c" ->
"": {
"a": {
"b": {
"c": "value of field"
}
}
}
"a.b.c." ->
"a": {
"b" : {
"c": {
"" : {
}
}
}
}
a.b = "alphabaker"
a.b.c = "alphabakercharlie"
These two field names cannot both be expanded, because:
all the nodes that need to be created are text nodes, and
some nodes need to be nested, but
in JSON, text nodes are not allowed to contain other objects.
We'll demonstrate the problem by examining what happens when Elasticsearch tries to create the JSON objects for our example.
The first field name results in "b" being mapped to a text node with the value "alphabaker."
"a": {
"b": "alphabaker"
}
To expand the second name, Elasticsearch tries to map "c" to a text node with the value "alphabakercharlie." This fails because "c" needs
to be nested within "b," which is a text node and cannot contain nested objects.
1. Verify that Python 3 is installed:
which python3
If python3 is not found, then install python3 by entering: sudo yum install python36 -y
2. Verify that the pip3 package manager is installed:
which pip3
If the pip3 package manager is not found, then see Installing Packages and enter: sudo python3 -m pip install --upgrade
pip setuptools wheel
3. To install the libraries which run the migration python script, enter this command within the Data Migration tool directory:
Clusters
The clusters section of the configuration defines the Events Service clusters involved in the migration. Each cluster has the following properties:
Properties Description
api_url URL pointing to one of the Events Service's API nodes or its load balancer.
check_hostname (Optional) Indicates whether to check the hostname when verifying the certificate. Default is true.
es_url_internal Internal URL pointing to one of the Elasticsearch master nodes. Used by other clusters for remote re-indexing.
keys Controller and OPS keys for the Events Service. These keys are located in the conf/events-service-api-store.properties file.
es6 has SSL enabled, while xpack_es6 has Elasticsearch X-Pack enabled.
{
"clusters": {
"es2": {
"keys": {
"CONTROLLER": "27410b11-296a-49e1-b2d2-d2371ab94d64",
"OPS": "45c25bad-636c-432f-b0bd-f8ec428c8db4"
},
"api_url": "35.162.126.253:9080",
"es_url": "35.162.126.253:9200",
"es_url_internal": "172.31.12.185:9200",
"es_version": 2
},
"es6": {
"keys": {
"CONTROLLER": "7db43bff-97d3-4d5e-828a-a2eacb693e07",
"OPS": "ac7424d1-ae96-4e10-ad82-a2eca50db133"
},
"api_url": "https://34.209.245.68:9080",
"certificate_file": "/Users/jun.zhai/es6.pem",
"check_hostname": false,
"es_url": "34.209.245.68:9200",
"es_version": 6
},
"xpack_es6": {
"keys": {
"CONTROLLER": "07b055f8-a97b-4ccb-a239-2267d452c4ea",
"OPS": "c582cc8a-f3bc-419f-a42a-7bb8adad05b8"
},
"api_url": "52.89.86.93:9080",
"es_url": "http://elastic:1234@52.89.86.93:9200",
"es_version": 6
}
}
}
Migration
This section describes the migration properties:
Properties Description
accounts (Optional) Specify which accounts to migrate. Defaults to everything in the source Events Service.
remote_reindex_scroll_batch_size Batch size for remote reindex. Default is 8000. See Re-index API.
reindex_task_polling_interval Frequency in seconds of how often to check the status of ongoing remote reindex tasks. Default is 60 seconds.
starting_max_fields_per_index Maximum fields allowed when creating a new index. Default is set to 1000. The value should be the same as
ad.es.event.index.startingMaxFieldsPerIndex in the conf/events-service-api-store.properties file.
{
"migration": {
"search_hits": 5000,
"remote_reindex_concurrency": 6,
"remote_reindex_scroll_batch_size": 8000,
"reindex_task_polling_interval": 60,
"starting_max_fields_per_index": 1000
}
}
Example 2: Migrate all event types in account customer15_9611293a-c56f-4c9a-aa11-9f6bffcb42ce, and the log_v1 and custom_event event
types in account customer1_229f6fbf-b42f-4d66-a56b-a2324d8b169d. This example does not migrate any other accounts.
{
"migration": {
"accounts": {
"customer15_9611293a-c56f-4c9a-aa11-9f6bffcb42ce": [],
"customer1_229f6fbf-b42f-4d66-a56b-a2324d8b169d": [
"log_v1",
"custom_event"
]
},
"search_hits": 5000,
"remote_reindex_concurrency": 4,
"remote_reindex_scroll_batch_size": 8000,
"reindex_task_polling_interval": 60,
"starting_max_fields_per_index": 1000
}
}
Upgrade Procedure
To upgrade the on-premises Events Service to version 20.9.0 or later, AppDynamics recommends that you follow this procedure.
The Events Service version 20.9.0 is packaged with the Enterprise Console version 21.2.4 or newer.
If the incoming data load is heavy, you may expect delays in Step 3d during the data migration process.
Data Migration
If you choose not to migrate data, then continue with Steps 1 through 3c, and do not complete Step 3d.
If you choose to migrate your data, then data conflicts may occur in Step 3d.
Upgrade Completion
If the upgrade fails, then you should revert the Controller to the old cluster; data loss may occur.
If the upgrade succeeds, then you can delete the old cluster.
Make sure that the older (source) Events Service cluster, new (target) Events Service cluster, and utility machine all reside on
the same network.
b. Ensure the target Events Service cluster has similar or better hardware than that of the source Events Service cluster. Ensure that the
operating system (OS) on the new machine has these settings:
i. Increase the file descriptors limit in the /etc/security/limits.conf file, and then reboot the machine. Use ulimit -n to
verify the value.
ii. Set vm.max_map_count in the /etc/sysctl.conf file, and then reboot the machine. Use cat /proc/sys/vm
/max_map_count to verify the value.
vm.max_map_count=262144
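To apply and verify the kernel settings described in these substeps, the standard Linux commands are (run as root):

```shell
# Apply the mapped-memory setting immediately (persist it via /etc/sysctl.conf):
sysctl -w vm.max_map_count=262144

# Verify both settings after the reboot:
cat /proc/sys/vm/max_map_count   # expect 262144
ulimit -n                        # expect the raised file-descriptor limit
```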
ad.es.node.http.enabled=true
- className: com.appdynamics.analytics.processor.elasticsearch.configuration.ElasticsearchConfigManagerModule
  properties:
    nodeSettings:
      cluster.name: ${ad.es.cluster.name}
      ...
      indices.fielddata.cache.size: ${ad.es.fielddata.cache.size}
      reindex.remote.whitelist: "<IP address to one of the nodes in older cluster>:9200"
iii. Restart the new Events Service cluster from the Enterprise Console.
3. Migrate the data:
a. See Use the Data Migration Tool to install and configure the data migration script on the utility instance.
b. Enter this command to migrate the metadata:
c. Navigate to the Controller Admin page and set your Controller to the new Events Service. See Connect to the Events Service.
d. Enter this command to migrate data from the old cluster to the staging cluster:
You can remove an on-premises Events Service from individual nodes or all at once. An embedded Events Service is uninstalled along with the Controller.
To do so through the CLI, you can use the uninstall-events-service command, which removes the Events Service software and data from all
cluster nodes:
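For example, using the command named above (the platform flag is an assumption):

```shell
bin/platform-admin.sh uninstall-events-service --platform-name <platform>
```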
After uninstalling Events Service, the only trace of the Events Service remaining on the host may be a file named orcha-modules.log. It appears in the
logs directory at the former installation root directory. To remove all traces of the Events Service, manually remove the log file after removing the Events
Service with the Enterprise Console.
To uninstall the Events Service from a single node with the Enterprise Console, see the Removing a Node section on Administer the Events Service.
1. Use the service-list command to find the service name for the Events Service: bin\events-service.exe service-list
Starting the ZooKeeper alone only brings up the process that manages index rollover. The Events Service node is not fully started until you start
the API-Store process as well, as described next.
2. Use the name returned for the service as the -s parameter argument to the following command: bin\events-service.exe service-
uninstall -s "<Name from service-list>"
The Synthetic Server dispatches and processes requests and depends on Synthetic Agents for executing and reporting measurements.
The Synthetic Server receives synthetic job requests from the Controller; the Synthetic Agents then fetch the jobs from the Synthetic
Server. Once the measurement results are received from the Synthetic Agents, the Synthetic Server stores, processes, and transmits the results to the
EUM Server.
Installation Overview
To set up a complete on-premises Synthetic Server deployment, you need to:
1. Install the on-premises Controller or prepare an in-service Controller to work with the EUM Server.
2. Install the on-premises Events Service and configure it to work with your on-premises Controller.
3. Install the on-premises EUM Server and configure it to work with your Events Service and Controller.
4. Install the on-premises Synthetic Server and configure it to work with the EUM Server and the Controller.
5. Install and configure one or both types of Synthetic Agents.
6. Secure the Synthetic Server (recommended).
7. Monitor the Synthetic Server (recommended).
Synthetic Scheduler
Synthetic Shepherd
Synthetic Feeder Client
This document also discusses the Synthetic Server Feeder, which communicates with the Synthetic Feeder Client, but is only a service of the
SaaS Synthetic Server.
Synthetic Scheduler
The first service is the Synthetic Scheduler, which is a cron-like service that sends job requests at configured intervals. The Synthetic Scheduler handles
the CRUD operations for jobs and manages the events generated for synthetic warnings and errors that occur in the measurement results. The Synthetic
Scheduler also validates the beacons, triggers warning and error events if needed, and forwards the beacons to the EUM Server.
Synthetic Shepherd
The second service is Synthetic Shepherd. This service manages and dispatches jobs to the Synthetic Agents. In addition, the Synthetic Shepherd saves
the measurement results to the filesystem and sends beacons containing the data to the Synthetic Scheduler.
Synthetic Agents
When deploying the on-premises Synthetic Server, you can deploy one or both Synthetic Agent types:
Synthetic Hosted Agents - Synthetic Agents that are hosted and maintained by AppDynamics
Synthetic Private Agents - Synthetic Agents that you install, configure, run, and maintain in your infrastructure
Synthetic Hosted Agent:
Access to a fleet of geographically distributed agents
Reduced ownership/resource costs: no hardware or cloud computing costs
Ease-of-use: no need to deploy/configure/manage agents
Scalability: Synthetic Hosted Agents are only deployed when needed, and more agents are readily available if the workload increases
Synthetic Private Agent:
Monitoring of internal sites and services that are not publicly accessible
Complete control over the agent configurations and environment
Synthetic Agents (listening port: N/A)
Role: Fetches jobs from the Synthetic Shepherd; uses WebDriver and Selenium to execute these jobs on browsers; registers with the
Synthetic Shepherd; uploads screenshots to the Synthetic Shepherd; handles communication with the Synthetic Shepherd.
Note: The Synthetic Agents do not listen on any port. They only temporarily open internal random ports to fetch job requests from the
Synthetic Shepherd and to send the measurement results of executed jobs to the Synthetic Shepherd.
MySQL (the EUM Server's MySQL database) stores:
Information about the entire agent fleet.
Measurement request queues.
In-flight and archived measurements.
Schedules: a schedule is a configuration according to which the Synthetic Scheduler sends measurement requests
to the Synthetic Shepherd. Those requests are queued until enough Synthetic Agents are available to process them,
at which point they are dequeued and become measurements.
File System (the host machine of the Synthetic Server) stores:
Resource snapshots
Script output
Measurement results
Screenshots
Synthetic Server Deployment Architecture
The following sections describe and provide diagrams of the different on-premises Synthetic Server deployments. The diagrams show the connections and
data flow between the components of the deployments. For information about the other AppDynamics platform components, see Platform Components and
Platform Connections.
Synthetic Private Agent data flow:
1. When a user creates a synthetic job, the Controller sends a request for the job with its configured frequency to the on-premises Synthetic
Server. The synthetic jobs are then placed in a queue. (HTTP(S) to the on-premises Synthetic Server, ports 12101/12102 and 10101/10102)
2. The Synthetic Private Agent fetches the job requests from the Synthetic Server and then executes them on a browser using Selenium.
(HTTP(S) to the on-premises Synthetic Server, ports 10101/10102)
3. The Synthetic Private Agent then sends the measurement results to the Synthetic Server. (HTTP(S) to the on-premises Synthetic Server,
ports 10101/10102)
4. The on-premises Synthetic Server stores some data on file, and then processes and converts the data into a beacon, which is then
transmitted to the EUM Server through the EUM API. The Synthetic Server also writes data to the EUM Server's MySQL database.
(HTTP(S) to the on-premises EUM Server, ports 7001/7002; JDBC, port 3388)
5. The Controller polls the EUM Server for the measurement results and displays them in the Synthetic Sessions. (HTTP(S) to the
on-premises EUM Server, ports 7001/7002)
Synthetic Hosted Agent data flow:
1. When a user creates a synthetic job, the Controller sends a request for the job with its configured frequency to the on-premises Synthetic
Server. The job requests are then placed in a queue. (HTTP(S) to the on-premises Synthetic Server, ports 12101/12102)
2. The on-premises Synthetic Server sends the job requests to the SaaS EUM Server. (HTTP(S) to the SaaS EUM Server, ports 7001/7002)
3. The Synthetic Hosted Agents fetch the job requests from the SaaS Synthetic Server and then execute them on a browser using Selenium.
(encrypted WebSocket to the SaaS Synthetic Server, port 16001)
4. The Synthetic Hosted Agents send the measurement results to the SaaS Synthetic Server. (HTTP(S) to the SaaS Synthetic Server,
ports 10001/10002)
5. The SaaS Synthetic Server Feeder sends the measurement results to the on-premises Synthetic Feeder Client. (encrypted WebSocket to the
on-premises Synthetic Server, port 16101)
6. The on-premises Synthetic Server stores some data on file, and then processes and converts the data into a beacon, which is then
transmitted to the on-premises EUM Server through the EUM API. The on-premises Synthetic Server also writes data to the EUM Server's
MySQL database. (HTTP(S) to the on-premises EUM Server, ports 7001/7002; JDBC, port 3388)
7. The Controller polls the on-premises EUM Server for the measurement results and displays them in the Synthetic Sessions. (HTTP(S) to the
on-premises EUM Server, ports 7001/7002)
This page lists the Synthetic Server requirements, offers sizing guidance, and shows you how to modify the default settings.
Synthetic Agent version requirements:
Synthetic Private Agent: 20.3 or higher
Synthetic Hosted Agent: 20.3 or higher
Certain Synthetic Server features—specifically, Synthetic Sessions Analytics, features of Application Analytics that extend the functionality of
Synthetic Sessions—require access to the AppDynamics Events Service.
Synthetic Private Agents: See Requirements for the Synthetic Private Agent.
Hardware Requirements
These requirements assume that the Synthetic Server is installed on a separate machine. If other AppDynamics platforms are installed on the same
machine, the requirements (particularly for memory) could vary greatly and require many more resources.
NTP should be enabled on both the EUM Server host and the Controller machine. The machine clocks need to be able to synchronize.
Scaling Requirements
You are required to have one EUM account for each on-premises deployment of the Synthetic Server. The machine hosting the Synthetic Server should be
able to support 100 concurrent Synthetic Agents or 10 locations with 10 Synthetic Agents per location.
If you need the Synthetic Server to support more than 100 concurrent Synthetic Agents, see Increase the Synthetic Agent Support.
You can use the following file systems for machines that run Linux:
ZFS
EXT4
XFS
On-premises deployments on Linux are only supported on Intel architecture. Windows is not supported at this time.
Network Requirements
The network settings on the operating system need to be tuned for high-performance data transfers. Incorrectly tuned network settings can manifest
themselves as stability issues.
The following command listing demonstrates tuning suggestions for Linux operating systems. As shown, AppDynamics recommends setting the TCP/FIN timeout to 10 seconds (the default is typically 60), reducing the TCP connection keepalive time to 1800 seconds (from the typical 7200), and disabling TCP window scaling, TCP SACK, and TCP timestamps.
The commands demonstrate how to configure the network settings in the /proc system. To ensure the settings persist across system reboots, configure the equivalent settings in /etc/sysctl.conf or the network stack configuration file appropriate for your operating system.
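Assuming a Linux host, the recommendations above could be captured in a sysctl configuration fragment such as the following sketch; verify the key names against your kernel's documentation before applying:

```
# /etc/sysctl.conf fragment (sketch based on the recommendations above)
net.ipv4.tcp_fin_timeout = 10        # lower the TCP/FIN timeout from the typical 60s
net.ipv4.tcp_keepalive_time = 1800   # reduce the keepalive time from the typical 7200s
net.ipv4.tcp_window_scaling = 0      # disable TCP window scaling
net.ipv4.tcp_sack = 0                # disable TCP SACK
net.ipv4.tcp_timestamps = 0          # disable TCP timestamps
```

You can apply the settings immediately without a reboot by running sysctl -p as root.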
Software Requirements
The Synthetic Server requires the following software to run and function correctly. You are required to have outbound internet access to install Python, pip,
and flake8.
Java 8 The Synthetic Server requires JDK 8 to run services such as the Synthetic Scheduler and Synthetic Shepherd.
You need to set the environment variable JAVA_HOME to the home directory of the JDK.
If the machine where you're installing the Synthetic Server does not have Internet access, follow these steps to fetch and install flake8:
1. From a machine that has internet access and pip installed:
a. Create a staging directory: mkdir ~/flake8
b. Download flake8 and its dependencies into the directory.
c. Package the directory as flake8.tgz.
d. Copy flake8.tgz to the $HOME directory of the host machine of the Synthetic Server.
2. From the host of the Synthetic Server that has no internet access, but does have pip installed:
a. Unzip and extract the flake8.tgz file:
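The steps above could be sketched as the following shell session; the use of pip download and the archive layout are assumptions, so adapt them to your environment:

```shell
# On a machine WITH internet access and pip installed:
mkdir ~/flake8
pip download flake8 -d ~/flake8        # fetch flake8 and its dependencies as files
tar -czf ~/flake8.tgz -C ~ flake8      # package the directory as flake8.tgz
# ...copy flake8.tgz to the $HOME of the Synthetic Server host (for example, with scp)

# On the Synthetic Server host (no internet access, pip installed):
tar -xzf ~/flake8.tgz -C ~
pip install --no-index --find-links ~/flake8 flake8   # install from the local files only
```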
libaio N/A The Synthetic Server requires the libaio library to be on the system. This library facilitates asynchronous I/O
operations on the system.
The following table provides instructions on how to install libaio for some common flavors of the Linux operating system. Note: if you have a NUMA-based architecture, you must also install the numactl package.
Red Hat and CentOS Use yum to install the library, such as:
Debian Use a package manager such as APT to install the library (as described for the Ubuntu instructions above).
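For example, the installs described above might look like the following; the package names are the usual ones for these distributions, but confirm them against your release:

```shell
# Red Hat / CentOS
yum install -y libaio
# Debian / Ubuntu
apt-get install -y libaio1

# On NUMA-based machines, also install numactl:
yum install -y numactl        # or: apt-get install -y numactl
```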
You run the Synthetic Server installer from the command line. The installer relies on the inputs.groovy file to configure the network connections to the on-premises EUM Server, and if you are using Synthetic Hosted Agents, to the SaaS EUM Server and SaaS Synthetic Server as well.
Successfully deployed the Controller, EUM Server, and the Events Service.
Downloaded the Synthetic Server installer package from the AppDynamics Download Center. The installer package will be listed on the
Downloads site as "Synthetic Server (zip)".
3. From the MySQL monitor, grant privileges to the MySQL user root of the Synthetic Server machine. The installer uses the MySQL user root to update the EUM database schema. Be sure to replace <on-prem-synthetic_server_hostname> with the hostname of your Synthetic Server.
The MySQL root user for the Synthetic Server is not related to the Linux user account that is installing the Synthetic Server. For example, the Linux user account ubuntu can run the installer, but the installer will use the MySQL user root when connecting to the EUM Server MySQL database to update the database schema.
4. You will also need to grant the MySQL user eum_user access to write data to the EUM database (eum_db). Be sure to replace <on-prem-synthetic_server_hostname> with the hostname of your Synthetic Server.
5. Set the password for the MySQL user root. The password should be the same as the one specified by db_root_pwd in the inputs.groovy file.
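Run from the MySQL monitor on the EUM Server host, steps 3 through 5 might look like the following sketch; the privilege scope and password syntax are assumptions that may differ by MySQL version, and the angle-bracket values are placeholders:

```shell
mysql -u root -p <<'SQL'
-- Step 3: let root from the Synthetic Server host update the EUM schema
GRANT ALL PRIVILEGES ON eum_db.* TO 'root'@'<on-prem-synthetic_server_hostname>';
-- Step 4: let eum_user from the Synthetic Server host write to eum_db
GRANT ALL PRIVILEGES ON eum_db.* TO 'eum_user'@'<on-prem-synthetic_server_hostname>';
-- Step 5: match the password to db_root_pwd in inputs.groovy
SET PASSWORD FOR 'root'@'<on-prem-synthetic_server_hostname>' = PASSWORD('<db_root_pwd>');
FLUSH PRIVILEGES;
SQL
```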
db_host — Assign the URL of the machine hosting your on-premises EUM Server to the db_host property. (The public DNS of the machine hosting the EUM Server.)
db_port — Change the value to "3388". This is the default port for the EUM Server's MySQL database. (The port that the EUM Server's MySQL database is listening on.)
db_username — Change the value to eum_user. This is the default user for the EUM Server. (The MySQL user that accesses the EUM Server's MySQL database.)
db_password — Assign the password for the MySQL user eum_user to remotely access the EUM Server's MySQL database. (The password that you set for the user specified by db_username. The value of db_username should be eum_user.)
collector_host — Assign the public DNS of the machine hosting your on-premises EUM Server to the collector_host property. (The public DNS of the machine hosting the EUM Server.)
collector_port — Confirm that the value is "7001". This is the default port of the EUM Server. (The port that the EUM API Server and EUM Collector are listening on. The default is '7001'.)
key_store_password — Assign the key store password you set when installing your on-premises EUM Server to the key_store_password property. (The key store password you set during the installation of the EUM Server.)
localFileStoreRootPath — Assign a file path where you want the Synthetic Server to store data to the localFileStoreRootPath property. The Synthetic Server must be able to read and write to the path and the files in the path. (The path where the Synthetic Server stores data such as the measurement results and the screenshots.)
unix/deploy.sh install
In the output from the install command, you should see the log of completed tasks similar to the following:
If you have jps installed, you can also run it to verify that the Synthetic Server services are running.
2. Verify that the Synthetic Scheduler and Synthetic Shepherd are listening on the default ports:
3. With mysql installed on the Synthetic Server machine, you can verify that the Synthetic Server machine can connect to the EUM Server MySQL eum_db database:
4. If you cannot connect to the EUM Server MySQL database, return to Grant Privileges to the EUM Server MySQL Database and complete the
steps again.
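The verification steps above could look like the following sketch; the hostnames are placeholders, and jps and netstat availability depends on your JDK and distribution:

```shell
# Step 1: confirm the Synthetic Server Java services are running (jps ships with the JDK)
jps -l

# Step 2: confirm the Scheduler (12101) and Shepherd (10101) default ports are listening
netstat -tln | grep -E '12101|10101'

# Step 3: confirm connectivity to the EUM Server's MySQL eum_db database
mysql -h <eum-server-host> -P 3388 -u eum_user -p eum_db -e 'SELECT 1;'
```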
Related pages:
For the Synthetic Server to function correctly, you need to set the URLs and ports of Synthetic Scheduler and Synthetic Shepherd in the Controller
Administration Console.
eum.synthetic.onprem.installation — The flag for enabling the on-premises Synthetic Server. This should be true. (Value: true)
eum.synthetic.scheduler.host — The URL and port of the Synthetic Scheduler on the machine hosting the Synthetic Server. The default port is 12101. (Value: http://<synthetic-server-domain>:12101)
eum.synthetic.shepherd.host — The URL and port of the Synthetic Shepherd on the machine hosting the Synthetic Server. The default port is 10101. (Value: http://<synthetic-server-domain>:10101)
4. Your configurations for the Synthetic Server should be similar to those in the screenshot of the Controller Configurations pane below.
The information and instructions below are intended for on-premises deployments only. SaaS deployments are managed by AppDynamics.
Related pages:
The Synthetic Private Agent fetches jobs from and reports measurement results to the Synthetic Shepherd service of the Synthetic Server. Thus, you must
correctly configure the Synthetic Agent so it can connect to Synthetic Shepherd.
To connect the Synthetic Agent to the on-prem Synthetic Server, follow the steps below:
1. Stop the Synthetic Private Agent if it's running by double-clicking the desktop icon.
2. Change to the directory C:\appdynamics\synthetic-agent\synthetic-driver\conf.
3. Edit the file synthetic-driver.yml.
4. Assign your EUM account and the license key to the properties eumAccount and licenseKey, and the URL to Synthetic Shepherd to shepherdUrl as shown below:
## Use the URL to your Synthetic Server and the port to the Synthetic Shepherd (10101)
shepherdUrl: http://<on-prem-synthetic-server-host>:10101
## You can get the values for this from the Controller Admin Console > Controller Settings
## or the properties 'property_eum-account-name' and 'property_eum-license-key' from your license file.
privateClient:
eumAccount: "<eum_account>"
licenseKey: "<license_key>"
Related pages:
For your on-premises Synthetic Server to use Synthetic Hosted Agents, you need to configure the on-premises Synthetic Server to communicate with the
SaaS EUM Server and SaaS Synthetic Server.
For a complete list of Synthetic Hosted Agent browser locations, containers, and providers, go to Synthetic Agent Locations.
property_eum-hmac-key=1a88392cb0004b45b555a854b80f23f5
The HMAC key is used to authenticate your on-prem deployment to the SaaS EUM Server and the SaaS Synthetic Server. Without the HMAC key, your
on-premises Synthetic Server will not be able to use Synthetic Hosted Agents.
1. Copy the license file to the Controller home directory. After moving the license file, allow up to 5 minutes for the license change to take effect.
2. Follow the instructions given in Provision EUM Licenses based on your deployment.
feeder_server_url = "wss://synthetic-feeder.api.appdynamics.com"
saas_cloud_api_url = "https://api.eum-appdynamics.com"
You can use the command line to perform platform administration tasks with the Synthetic Server, such as starting and stopping the services. This page
describes the available commands, the log files, and the endpoints for testing the reachability and health of the Synthetic Server. Run the commands from
the root directory of the Synthetic Server home.
unix/deploy.sh start
To check if the Synthetic Server services are running and accessible, run the following command and confirm that there is output:
unix/deploy.sh stop
To increase the maximum number of requests per second that the Synthetic Server can receive:
throttleConfiguration:
maxRequestsPerSecondOverall: 80
4. Edit the file synthetic-processor/conf/synthetic-scheduler.yml and increase the value for the property maxRequestsPerSecondOverall. Again, in this example configuration, the value is increased to 80.
throttleConfiguration:
maxRequestsPerSecondOverall: 80
unix/deploy.sh stop
unix/deploy.sh start
6. You can verify that the settings have been updated by checking the logs for the Synthetic Server you changed:
unix/post_deploy.sh
See Create Preset Dashboards and Health Rules to learn more about the preset health rules and dashboards.
curl <on-prem-synthetic_server_url>:10102/ping
curl <on-prem-synthetic_server_url>:12102/ping
curl <on-prem-synthetic_server_url>:16102/ping
curl <on-prem-synthetic_server_url>:10102/healthcheck?pretty=true
curl <on-prem-synthetic_server_url>:12102/healthcheck?pretty=true
curl <on-prem-synthetic_server_url>:16102/healthcheck?pretty=true
If the Synthetic Server is healthy, the response should be similar to the following:
{
  "authentication" : {
    "healthy" : true
  },
  "deadlocks" : {
    "healthy" : true
  },
  "httpClient" : {
    "healthy" : true
  },
  "quartzScheduler" : {
    "healthy" : true
  }
}
{
  "authentication" : {
    "healthy" : true
  },
  "cluster" : {
    "healthy" : true
  },
  "deadlocks" : {
    "healthy" : true
  },
  "linter" : {
    "healthy" : true
  },
  "quartzSynthBackgroundScheduler" : {
    "healthy" : true
  },
  "quartzSynthJobScheduler" : {
    "healthy" : true
  }
}
curl <on-prem-synthetic_server_url>:16102/healthcheck?pretty=true
{
  "deadlocks" : {
    "healthy" : true
  }
}
<installDir>/logs/synthetic-scheduler.err
<installDir>/logs/synthetic-shepherd.err
<installDir>/logs/synthetic-feeder-client.err
<installDir>/logs/scheduler
<installDir>/logs/shepherd
<installDir>/logs/feeder-client
The naming convention for the general log files is <log>-YYYY-MM-DD.log. You will need to set up a policy to archive or delete the general log files to
prevent running out of disk space.
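Given the naming convention above, a cleanup policy could be sketched as a small shell function run from cron; the 30-day retention and the idea of matching the date suffix are assumptions to adapt to your disk budget:

```shell
# prune_synthetic_logs <log_dir> [retention_days]
# Deletes general Synthetic Server log files (named <log>-YYYY-MM-DD.log)
# older than the retention period. Default retention: 30 days.
prune_synthetic_logs() {
  log_dir=$1
  retention_days=${2:-30}
  # Match the <log>-YYYY-MM-DD.log naming convention and delete old files
  find "$log_dir" -type f \
       -name '*-[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].log' \
       -mtime +"$retention_days" -delete
}
```

You could invoke this nightly from cron against each of the scheduler, shepherd, and feeder-client log directories.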
AppDynamics recommends configuring the Synthetic Server to use SSL to secure network connections. This page describes how to create a custom keystore and then configure the Synthetic Server to use it to implement SSL.
keytool
openssl
cd <synthetic_server_root>
3. Create a new keystore with a new unique key pair that uses RSA encryption:
This creates a new public-private key pair with an alias of "synthetic-server". You can use any value you like for the alias. The "first and last
name" required during the installation process becomes the common name (CN) of the certificate. Use the name of the server.
4. Configure the keystore by entering the information requested at the command prompt.
5. Specify a password for the key store. You need to configure this password in the Synthetic Server configuration file later.
6. Generate a certificate signing request (CSR):
This generates a certificate signing request based on the contents of the alias; in the example, it is "synthetic-server".
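The keystore and CSR steps above might look like the following keytool commands; the keystore path, key size, and validity period are assumptions, while the alias and CSR file name follow the example in the text:

```shell
cd <synthetic_server_root>

# Step 3: create a keystore with a new RSA key pair under the alias "synthetic-server";
# keytool prompts for the "first and last name" (use the server name as the CN)
# and for the keystore password (steps 4 and 5)
keytool -genkeypair -alias synthetic-server -keyalg RSA -keysize 2048 \
        -keystore conf/keystore.jks -validity 365

# Step 6: generate a certificate signing request from that alias
keytool -certreq -alias synthetic-server -keystore conf/keystore.jks \
        -file /tmp/synthetic-server.csr
```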
The following steps are a continuation of the process from Create a Certificate and Keystore:
1. Send the output file from the last step (/tmp/synthetic-server.csr in this example) to a Certificate Authority for signing.
2. Install the certificate for the Certificate Authority used to sign the .csr file:
This command imports your CA's root certificate into the keystore and stores it in an alias called "myorg-rootca".
3. Install the signed server certificate as follows:
This command imports your signed certificate over the top of the self-signed certificate in the existing alias; in the example, it is "synthetic-server".
4. Import the root certificate to the other platform components connecting to the Synthetic Server through HTTPS:
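Steps 2 through 4 above might look like the following keytool commands; the aliases come from the examples in the text, and the certificate file names and keystore path are placeholders:

```shell
# Step 2: import the CA's root certificate into the keystore under alias "myorg-rootca"
keytool -import -trustcacerts -alias myorg-rootca \
        -file /tmp/myorg-rootca.crt -keystore conf/keystore.jks

# Step 3: import the signed server certificate over the existing "synthetic-server" alias
keytool -import -trustcacerts -alias synthetic-server \
        -file /tmp/synthetic-server.crt -keystore conf/keystore.jks

# Step 4: import the root certificate into the truststore of each platform component
# that connects to the Synthetic Server over HTTPS
keytool -import -trustcacerts -alias myorg-rootca \
        -file /tmp/myorg-rootca.crt -keystore <component_truststore>.jks
```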
server:
...
applicationConnectors:
- type: https
port: <port>
keyStorePath: <path to JKS files>
keyStorePassword: <jks file password>
validateCerts: false
If you don't already have a signed certificate, see Create and Sign an RSA Security Certificate.
2. Edit the Synthetic Shepherd configuration file at <installation directory>/conf/synthetic-shepherd.yml and add the applicationConnectors object shown below under server:
server:
...
applicationConnectors:
- type: https
port: <port>
keyStorePath: <path to jks file>
keyStorePassword: <jks file password>
validateCerts: false
Related pages:
This page describes how to configure your on-prem Synthetic Server to use a proxy server to communicate with the SaaS EUM Server and SaaS
Synthetic Server. You can set up a proxy to add a security layer for your on-prem Synthetic Server.
The on-prem Synthetic Scheduler sends the job requests to the proxy server, which then forwards the requests to the SaaS EUM Server. (Destination: SaaS EUM Server; Protocol: HTTP(S); Port: 7001/7002)
The SaaS EUM Server forwards the job requests to the SaaS Synthetic Server. (Destination: SaaS Synthetic Server; Protocol: HTTP(S); Port: N/A)
The on-prem Synthetic Server Client Feeder communicates with the SaaS Synthetic Server Feeder through a bi-directional WebSocket connection that is
not made through the proxy server.
The on-prem Synthetic Server (Scheduler) sends the job requests to the proxy server, which then forwards the requests to the SaaS EUM Server. (Destination: SaaS EUM Server; Protocol: HTTP(S); Port: 7001/7002)
The SaaS EUM Server forwards the requests to the SaaS Synthetic Server. (Destination: SaaS Synthetic Server; Protocol: HTTP(S); Port: N/A)
The on-prem Synthetic Server (Client Feeder) establishes a WebSocket connection with the SaaS Synthetic Server (Synthetic Server Feeder) through the proxy server. (Destinations: SaaS Synthetic Server and on-prem Synthetic Server; Protocol: HTTP(S); Port: 80/443)
The SaaS Synthetic Server (Synthetic Server Feeder) and on-prem Synthetic Server (Client Feeder) communicate bi-directionally through the proxy server. Most of the traffic is from the SaaS Synthetic Server Feeder to the on-prem Synthetic Client Feeder. (Destinations: on-prem Synthetic Server on WebSocket port 16101, SaaS Synthetic Server on WebSocket port 16001)
Synthetic Shepherd
From the host machine of the Synthetic Server, set the proxyUrl and proxyPort properties in the <synthetic-server_installation_dir>/synthetic-processor/conf/synthetic-shepherd.yml file to the URL and port of the proxy server to which the Synthetic Shepherd will send requests.
saasLink:
proxyUrl: "127.0.0.1"
proxyPort: 3128
saasLink:
proxyUrl: "127.0.0.1"
proxyPort: 3128
websocketConfiguration:
proxyUrl: "127.0.0.1"
proxyPort: 3128
Note
The Feeder Client for Synthetic services does not support proxy authentication.
Squid
1. Add the following configurations to the Squid configuration file at /etc/squid/squid.conf.
2. Restart squid.
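A minimal Squid configuration for step 1 might look like the following sketch; the port matches the 3128 used in the proxy examples above, while the ACL names, the source address, and the allowed ports are assumptions you should tighten for your network policy:

```
# /etc/squid/squid.conf fragment (sketch; restrict the ACLs for production)
http_port 3128
acl synthetic_server src <on-prem-synthetic-server-ip>/32
acl saas_ports port 80 443 7001 7002
http_access allow synthetic_server saas_ports
http_access deny all
```

For step 2, restart Squid with, for example, systemctl restart squid.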
AppDynamics recommends that you monitor the performance of the Synthetic Server with the Java Agent. Once you have instrumented the Synthetic
Server, you can use preset capacity monitoring dashboards and health rules or create custom dashboards and health rules based on JMX and the
Synthetic Server metrics.
1. Change to <agent_home>/conf/.
2. Edit the controller-info.xml file so that the values for the following elements match your Controller information, application name, tier
name, and node name:
<controller-host>
<controller-port>
<application-name>
<tier-name>
<node-name>
3. For example, your controller-info.xml file might look similar to the following:
<controller-info>
<controller-host>192.168.1.20</controller-host>
<controller-port>8090</controller-port>
<application-name>SyntheticServer</application-name>
<tier-name>SchedulerTier</tier-name>
<node-name>SchedulerNode</node-name>
</controller-info>
You must ensure that the application name, tier name, and node name are the same as the javaagent.jar parameters when you
attach the Java Agent to the Synthetic Server.
In the inputs.groovy file, make sure you have set the following properties. Replace placeholders in brackets with information about your Controller as
well as the application and tier that are being monitored.
1. Set the options for the Synthetic Scheduler so that the Java Agent is attached to the JVM process:
SCHEDULER_OPTS="-javaagent:./java_agent/javaagent.jar -Dappdynamics.agent.applicationName=synthonprem -
Dappdynamics.agent.nodeName=synthetic-scheduler -Dappdynamics.agent.tierName=scheduler-tier"
2. Set the options for the Synthetic Shepherd so that the Java Agent is attached to the JVM process:
SYNTHETIC_SHEPHERD_OPTS="-javaagent:./java_agent/javaagent.jar -Dappdynamics.agent.
applicationName=synthonprem -Dappdynamics.agent.nodeName=synthetic-shepherd -Dappdynamics.agent.
tierName=shepherd-tier"
3. Set the options for the Synthetic Feeder Client so that the Java Agent is attached to the JVM process:
FEEDER_CLIENT_OPTS="-javaagent:./java_agent/javaagent.jar -Dappdynamics.agent.
applicationName=synthonprem -Dappdynamics.agent.nodeName=synthetic-feeder-client -Dappdynamics.agent.
tierName=feeder-client-tier"
export SYNTHETIC_SHEPHERD_OPTS
export SCHEDULER_OPTS
export FEEDER_CLIENT_OPTS
5. From the Synthetic Server installer directory, run the following to stop and start the Synthetic Server:
unix/deploy.sh stop
unix/deploy.sh start
tail <java-agent>/ver<agent-version>/logs/<node-name>/agent-XXXX.log
unix/post_deploy.sh
4. From the Controller UI, navigate to the Dashboards & Reports page.
5. You should see the dashboard Synthetic Private Agent Capacity Monitoring as shown here:
Custom Dashboards
Configure Health Rules
Related pages:
Port Settings
EUM Server Endpoints
The Synthetic Server has different endpoints serving distinct functions. This page provides a reference for testing the health and getting information about
on-premises Synthetic Servers.
Synthetic Shepherd - manages and dispatches jobs to the Synthetic Agents. In addition, the Synthetic Shepherd saves the measurement results
to the filesystem and sends beacons containing the data to the Synthetic Scheduler.
Synthetic Scheduler - handles the CRUD operations for jobs and manages the events generated for synthetic warnings and errors that occur in
the measurement results.
Synthetic Feeder Client - communicates with the SaaS Synthetic Feeder Server to access the Synthetic Hosted Agents.
This page describes how to upgrade the Synthetic Server to the latest version. This is often done alongside an upgrade to the other platform components,
such as the EUM Server, the Controller, and the Events Service.
The upgrade steps will not update third-party software such as the Python library flake8.
1. Back up your current installation. There is no way to downgrade and changes are permanent.
2. Update platform components in the correct update order.
Upgrade Procedure
Follow the instructions below to upgrade your Synthetic Server.
Prerequisite
Retain the Synthetic services database from the previous version.
Upgrade Steps
To upgrade the Synthetic services, first change to the installed Synthetic services path (cd <Synthetic_home>) and follow these steps:
1. Stop the services:
./unix/deploy.sh stop
2. Back up the inputs.groovy file from <Synthetic_home> before deleting the contents of <Synthetic_home>. This preserves the previous configuration details.
3. Delete all the files and folders under <Synthetic_home>.
4. Copy and unzip the new Synthetic services installer under <Synthetic_home>.
5. Replace the inputs.groovy.sample file with the backed-up inputs.groovy file from step 2.
6. Run the following command to install the new services:
./unix/deploy.sh install
curl <on-prem-synthetic_server>:10101/version
curl <on-prem-synthetic_server>:12101/version
curl <on-prem-synthetic_server>:16101/version
AppDynamics includes security features that help to ensure the safety and integrity of your deployment.
While the security features of the Controller are enabled out of the box, there are some steps you should take to ensure the security of your deployment.
These steps include but are not limited to:
The SSL port uses a self-signed certificate. If you intend to terminate SSL connection at the Controller, you should replace the default certificate
with your own, CA-signed certificate. If you replace the default SSL certificate on the Controller, you will also need to establish trust for the Controll
er's public key on the App Agent machine.
As an alternative to terminating SSL at the Controller, you can put the Controller behind a reverse proxy that terminates SSL, relieving
the Controller from having to process SSL.
Along with a secure listening port, the Controller provides an unsecured HTTP listening port as well. You should disable the port or block access to it from any untrusted networks.
Make sure that your App Agents connect to the Controller or to the reverse proxy if terminating SSL at a proxy, with SSL enabled.
The Controller and underlying components, Glassfish and MySQL, include built-in user accounts. Be sure to change the passwords for the
accounts regularly and in general, follow best practices for password management for the accounts. For information on changing the passwords
for built-in users, see Update the Root User and Glassfish Admin Passwords.
If clients use SSL, the reverse proxy can terminate SSL connections or maintain SSL through to the Controller. Terminating SSL at the proxy removes the
processing burden from the Controller machine.
Using a secure proxy can simplify administration as a whole, by centralizing SSL key management to a single point. It allows you to use alternative PKI
infrastructures, like OpenSSL.
There is built-in HTTPS support for the Enterprise Console using a self-signed keystore file. The Enterprise Console supports either HTTP or HTTPS
based on your choice during fresh installation or upgrade. You can reconfigure the Enterprise Console to revert from HTTPS back to HTTP, or vice versa,
at any time.
For production use, AppDynamics strongly recommends that you replace the self-signed certificate with a certificate signed by a third-party Certificate
Authority (CA) or your own internal CA.
Replacing the entire keystore is not recommended unless you first export the existing artifacts from the default keystore and import them into
your own keystore.
It is also not recommended that you create your own self-signed certificate.
The exact steps to implement security typically vary depending on the security policies for the organization. For example, if your organization already has a
signed certificate to use, such as a wildcard certificate used for your organization's domain, you can import it into the keystore using the Enterprise
Console's update-certificate command. Otherwise, you can obtain a new one along with a certificate signing request.
Before Starting
On Linux machines, the Enterprise Console uses curl to check the responsiveness of the application URL. For SSL to work, the machine therefore needs the latest NSS package.
For example, you can update the NSS package to the latest with the following command on CentOS:
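On CentOS and other yum-based distributions, the update could look like this; run it as root or via sudo:

```shell
yum -y update nss
```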
Your update procedure or NSS package name may differ depending on your Linux operating system.
If the checkbox is left unselected, then the installation will default to HTTP.
The Enterprise Console will create a self-signed certificate and use it for your HTTPS connection.
For Silent Installation, edit the platform admin response varfile with the following parameter:
This enables HTTPS connection and is the same as checking Enable Https Connection in the wizard installation option. Setting the
parameter to false uses HTTP.
2. Optional: The Enterprise Console will create a self-signed certificate and use it for HTTPS connection. You can replace it with a certificate signed
by a third-party CA. See Update to a Signed Certificate.
The Enterprise Console then configures the HTTPS protocol and disables HTTP in the Dropwizard configuration file, PlatformAdminApplication.yml. See
the Dropwizard Configuration Reference for more information.
server:
type: simple
connector:
type: encrypted-https # encrypted-https is a customized HTTPS connector type in Enterprise Console, with
keystore password encrypted.
port: 9191 # DO NOT REMOVE alternatives are 8080
keyStorePath: /appdynamics/platform-admin/conf/keys/keystore.jks
keyStorePassword: s_-001-12-v/yKyIweuGQ=iLpBEDTqfP7vj++WP+MKEg==
trustStorePath: /appdynamics/platform-admin/conf/keys/cacerts.jks
trustStorePassword: s_-001-12-hdLwJEOZbns=kDmS/pLvq2A43iCWLJEcTg==
certAlias: ec-server # DO NOT change cert alias name in keystore files.
validateCerts: false
supportedProtocols: [TLSv1.2]
bindHost: 0.0.0.0
applicationContextPath: /
The Enterprise Console encrypts all plain text passwords in the configuration file.
platformAdmin.useHttps$Boolean=true
When you upgrade your HTTPS supported Enterprise Console to future release versions, the Enterprise Console will follow the below protocols:
Upgrades with a self-signed certificate (the Enterprise Console installed certificate): The Enterprise Console will always recreate
the new keystore.jks, with a new self-signed certificate in it, and update the cacerts.jks file with the new self-signed certificate
under the <EC_installationDir>/conf/keys folder.
Upgrades with a signed certificate: The Enterprise Console will not modify your signed certificate, leaving it as it was before the
upgrade.
Note: Do not change the serverHostName in the admin response varfile from a private IP/hostname (if you used a private IP
/hostname as the serverHostName for a previous Enterprise Console fresh install or upgrade) to a public IP as the Enterprise
Console will not support the signed certificate afterwards. This restriction only applies to upgrades with a signed certificate.
Upgrades with customized keystore/truststore path and passwords: The Enterprise Console will back up the .jks files only if
they are under the <EC_installationDir>/conf/keys folder. The Enterprise Console will restore your keystore/truststore paths
and passwords in PlatformAdminApplication.yml before the upgrade, even if you move the .jks files or change the password.
Changing the location of the .jks files is not recommended, as they will not be backed up by the Enterprise Console if they are in
another location.
1. Create your san.cnf file for the SAN. In the following example san.cnf file, multiple domain names and aliases are defined in [ alt_names
].
[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[ req_distinguished_name ]
countryName = IN
stateOrProvinceName = Karnataka
localityName = Bangalore
organizationName = Appdynamics
commonName = ECserver
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = ECserver.com
DNS.2 = ECserver.secondary.com
DNS.3 = ECserver.alias1.com
DNS.4 = ECserver.alias2.com
IP.1 = 10.10.10.10
IP.2 = 10.10.10.9
2. Using the san.cnf file, generate the private key and CSR with the following openssl command:
openssl req -new -newkey rsa:2048 -nodes -out sslcert.csr -keyout private.key -config san.cnf
To keep your customized files backed up, place them under the <EC_installationDir>/conf/keys directory. Placing your files anywhere
besides <EC_installationDir>/conf/keys is not recommended because they may not be backed up.
If you change the keystore content for the Enterprise Console, you must re-run the change-keystore-password command and re-encrypt. Then, you
need to restart the Enterprise Console. See Controller Secure Credential Store for more information.
For the keystore.jks file to work for the Enterprise Console, the storepass and keypass must be the same.
3. Change the ec-server keypass in keystore.jks by using the <plain_text_password> from Step 1:
4. Use the encrypted password to update the key "keyStorePassword" in the Enterprise Console Dropwizard configuration file (PlatformAdminApplication.yml).
5. Restart the Enterprise Console.
If you want to reuse the existing public key from the Java keystore to generate a CSR request, you must import the signed certificates manually.
See step 6 to 9 in Create a Certificate and Generate a CSR for more information.
Most Linux distributions include OpenSSL. If you are using Windows or your Linux distribution does not include OpenSSL, you may find more information
on the OpenSSL website.
//Some CAs will create everything for you, including the private key. You may use the following
keytool command to create a csr request from existing keystore.jks.
keytool -certreq -alias ec-server -keystore keystore.jks -file AppDynamics.csr
or
//You can also use the following openssl command to create your own private key and csr request.
openssl req -new -newkey rsa:2048 -nodes -out <name of csr request file>.csr -keyout <name of
private key>.key -subj "/C=<custom>/ST=<custom>/L=<custom>/O=<custom>/OU=<custom>/CN=<hostname>"
b. Submit the certificate signing request file generated by the command (AppDynamics.csr in our example command) to your Certificate
Authority of choice. When it's ready, the CA will return the signed certificate and any root and intermediate certificates required for the
trust chain. The response from the Certificate Authority should include any special instructions for importing the certificate, if needed. If
the CA supplies the certificate in text format, just copy and paste the text into a text file.
2. Run the Enterprise Console update certificate CLI command:
The privateKeyfile, sslCertFile, and ssl-chain files do not have any file format restrictions. Any file format, such
as .pkey and .txt, should work, as long as it is readable.
The privateKeyfile file content must follow the PKCS8 format.
sslCertFile is your SSL certificate file.
ssl-chain files are additional certificates, such as intermediate certificates. These are optional, and you may provide as
many of them as you would like.
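Assembled from the parameter names described above, the CLI invocation might look like the following sketch; the exact flag spelling can differ between Enterprise Console versions, so confirm it with the command's built-in help before running:

```shell
# Hypothetical invocation built from the privateKeyfile, sslCertFile, and
# ssl-chain parameters described above; file paths are placeholders.
bin/platform-admin.sh update-certificate \
    --sslCertFile /path/to/sslcert.pem \
    --privateKeyFile /path/to/private.key \
    --ssl-chain /path/to/intermediate.pem
```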
This command updates the certificate in the keystore and truststore in the configuration yml file.
3. Restart the Enterprise Console for the new SSL configurations to take effect.
To make sure the configuration works, use a browser to connect to the Enterprise Console over the default secure port, port 9191:
https://<hostname>:9191
Specify the hostname you used when you installed the Enterprise Console. The default port is 9191. Your firewall rules must expose this port so that you can access it from wherever you administer the platform. See Port Settings for more information.
Make sure the Enterprise Console entry page loads in the browser correctly.
Depending on your browser, you may have to perform additional steps to verify your connection. For instance, for self-signed certificates on
Chrome, you have to click to proceed. On Firefox, you have to create a security exception to proceed.
You can also verify that your configuration works by running commands on the Enterprise Console CLI.
Expired Certificate
If the certificate has expired, the Enterprise Console CLI continues to work, but it prints a warning notifying you that the certificate has expired.
Upgrades to the Enterprise Console are unaffected by expired certificates; even if you upgrade without realizing that the certificate has expired, the upgrade still succeeds.
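You can check whether a certificate file has expired with openssl. The sketch below creates a throwaway self-signed certificate (hypothetical CN) purely to demonstrate the check; in practice you would point `-in` at your deployed certificate:

```shell
# Create a short-lived self-signed certificate, then check whether it has
# expired: -checkend 0 exits 0 if the certificate is still valid right now.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tmp.key -out tmp.crt -days 1 -subj "/CN=ec.example.com"
openssl x509 -checkend 0 -noout -in tmp.crt && echo "still valid"
```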
You can update the Enterprise Console self-signed certificate by reinstalling the application. For your own signed certificate, you can obtain a new one from your CA and run the CLI command described in Update to a Signed Certificate.
platformAdmin.useHttps$Boolean=false
The Enterprise Console backs up any self-signed or signed certificate that is under <EC_installationDir>/conf/keys. However, if you manually
move the keystore.jks and truststore.jks to your own location, then you will need to back up your customized certificates and SSL related files on
your own before the upgrade.
The Controller comes with a preconfigured HTTPS port (port 8181 by default) that is secured by a self-signed certificate. This page describes how to
replace the default certificate with your own custom certificate.
You cannot modify, however, the password for the reporting-service.pfx file that is generated by the keystore artifact reporting-instance and
used by the Reporting Service.
The exact steps to implement security typically vary depending on the security policies for the organization. For example, if your organization already has a
certificate to use, such as a wildcard certificate used for your organization's domain, you can import the existing certificate into the Controller keystore.
Otherwise, you'll need to generate a new one along with a certificate signing request. The following sections take you through these scenarios.
Before Starting
The following instructions describe how to configure SSL using the Java keytool utility bundled with the Controller installation. You can find the keytool
utility in the following location:
<controller_home>/jre/bin
The steps assume that the keytool is in the operating system's path variable. To run the commands as shown, you first need to put the keytool utility in
your system's path. Use the method appropriate for your operating system to add the keytool to your path.
AppDynamics requires an X.509 digital certificate; the certificate can be supplied in any file format.
In these steps, you generate a new certificate within the Controller's active keystore, so it has immediate effect.
The steps are intended to be used in a staging environment, and require the Controller to be shut down and restarted. Alternatively, you can generate the
key as described here but in a temporary keystore rather than the Controller's active keystore. After the certificate is signed, you can import the key from
the temporary keystore to the Controller's keystore.
<controller_home>/appserver/glassfish/domains/domain1/config
2. Create a backup of the keystore file. For example, on Linux, you can run:
cp keystore.jks keystore.jks.backup
5. Create a new key pair in the keystore using the following command. This command creates a key pair with a validity of 1825 days (5 years).
Replace 1825 with the validity period appropriate for your environment, if desired.
keytool -genkeypair -alias s1as -keyalg RSA -keystore keystore.jks -keysize 2048 -validity 1825
For the first and last name, enter the domain name where the Controller is running, for example, controller.example.com.
Enter the default password for the key, changeit.
This generates a self-signed certificate in the keystore. We'll generate a signing request for the certificate next. You can now restart the Controller and continue to use it. Since it still has a temporary self-signed certificate, browsers connecting to the Controller UI will warn that the certificate could not be verified.
See Change Keystore Password for information on changing the default password for the keystore and certificates.
6. Generate a certificate signing request for the certificate you created as follows:
7. Submit the certificate signing request file generated by the command (AppDynamics.csr in our example command) to your Certificate Authority
of choice.
When it's ready, the CA will return the signed certificate and any root and intermediate certificates required for the trust chain. The response from the Certificate Authority should include any special instructions for importing the certificate, if needed. If the CA supplies the certificate in text format, copy and paste the text into a text file.
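Before submitting a CSR, you can sanity-check its contents with openssl. A minimal sketch, using a throwaway key and a hypothetical CN:

```shell
# Generate a throwaway key and CSR, then inspect the CSR subject to confirm
# it matches the Controller's domain name before sending it to the CA.
openssl req -new -newkey rsa:2048 -nodes -keyout tmp.key -out AppDynamics.csr -subj "/CN=controller.example.com"
openssl req -in AppDynamics.csr -noout -subject
```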
8. Import the signed certificate:
9. When done importing the certificate chain, try importing the signed certificate again.
Most Linux distributions include OpenSSL. If you are using Windows or your Linux distribution does not include OpenSSL, you may find more information
on the OpenSSL website.
The private key you use for the following steps must be in plain text format. Also, do not associate a password with the private key as you convert it to PKCS12 keystore form. If you do, the following steps can be completed as described, but the Controller fails at startup with the error message: java.security.UnrecoverableKeyException: Cannot recover key.
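As a hedged illustration of the PKCS12 step, the following sketch bundles a plain-text key and certificate into a PKCS12 keystore with openssl. The filenames and CN are hypothetical; note that `-passout` sets the store password while the key itself is not given its own password:

```shell
# Create a throwaway key/certificate pair, then export them as a PKCS12
# keystore under the required s1as alias. The key remains unencrypted;
# only the store is protected with the password (here the default, changeit).
openssl req -x509 -newkey rsa:2048 -nodes -keyout plain.key -out plain.crt -days 1 -subj "/CN=controller.example.com"
openssl pkcs12 -export -in plain.crt -inkey plain.key -name s1as -out controller.p12 -passout pass:changeit
```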
cd <controller_home>/appserver/glassfish/domains/domain1/config/
4. Create a backup of the keystore file. For example, on Linux, you can run:
cp keystore.jks keystore.jks.backup
7. Update the alias name on the key pair you just imported:
The alias name should be s1as. Do not change it from this name.
8. Change the private key password:
keytool -keypasswd -keystore keystore.jks -alias s1as -keypass <.p12_file_password> -new <password>
For the new private key password, use the default (changeit) or the master password set as described in Change Keystore Password, if
changed.
9. If you get the error "Failed to establish chain from reply", install the issuing Certificate Authority's root and any intermediate certificates into the
keystore. The root CA chain establishes the validity of the CA signature on your certificate. Although most common root CA chains are included in
the cacerts.jks truststore, you may need to import additional root certificates. To do so:
https://<controller_host>:8181/controller
Make sure the Controller entry page loads in the browser correctly. Also, verify that the browser indicates a secure connection. Most browsers display a
lock icon next to the URL to indicate a secure connection.
After changing the certificate on the Controller, you will need to import the public key of the certificate to the agent truststore. For information on how to do
this, see the topic specific for the agent type:
If there is no proxy configured and the agent is reporting to the Controller itself, then the following changes are also mandatory:
platform-admin.sh stop-controller-appserver
On Windows, run this command from an elevated command prompt (which you can open by right-clicking on the Command Prompt icon in the
Windows Start menu and choosing Run as administrator):
You should also edit the domain.xml configurations on the Controller Settings page of the Enterprise Console to retain your settings.
See Update Platform Configurations for more information.
-Dappdynamics.controller.port=
-Dappdynamics.controller.services.port=
3. In the following property, change the protocol from HTTP to HTTPS, and change the port to the secure port.
-Dappdynamics.controller.ui.deeplink.url=
You can also use REST API to update the deeplink URL:
4. Add the following JVM argument alongside the arguments above to ensure the internal agent connects using SSL.
-Dappdynamics.controller.ssl.enabled=true
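For illustration, assuming the default secure port of 8181 and a hypothetical hostname, the full set of updated JVM options might look like:

```
-Dappdynamics.controller.port=8181
-Dappdynamics.controller.services.port=8181
-Dappdynamics.controller.ui.deeplink.url=https://controller.example.com:8181/controller
-Dappdynamics.controller.ssl.enabled=true
```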
platform-admin.sh start-controller-appserver
You can also use the modifyJVMOptions.sh script to make the changes.
Changing the password in this manner does not affect the administration password you use to access the Glassfish administration console. See
Update the Root User and Glassfish Admin Passwords for information on changing this password.
To change the password you must use the Glassfish administration tool (rather than the keytool utility directly). Using the Glassfish administration tool
allows the Glassfish instance to access the keys at runtime.
If you change the keystore password directly using the keytool, the Controller generates the following error message at start up:
If you encounter this scenario, change the password using the asadmin utility.
Changing the master password with asadmin changes the password for the domain-passwords, cacerts.jks, and keystore.jks stores (including
the s1as, reporting-instance, and glassfish-instance private keys in keystore.jks).
However if you customized any additional keys or existing key passwords, and they do not match the master password, when you change the master
password, the following error is generated:
This indicates that the store password for keystore.jks has been set to the master password, but one or more of the private keys still have a different key password that does not match the master password. This prevents the Controller application from starting and generates the following error:
1. To resolve this issue, update each of the private key passwords in keystore.jks (s1as, reporting-instance, and glassfish-instance) to ensure
that they match the master password by entering the following keytool command:
Replace the <JRE_HOME>, <alias_name>, and <controller_home> variables with your information before executing the keytool
command.
If the key password already matches the master password, the message "Passwords must differ" displays when you enter the new key password. This confirms that the key password is set correctly.
3. Restart the Controller and ensure it starts without errors.
<controller_home>/appserver/glassfish/domains/domain1/config
cp keystore.jks keystore.jks.backup
4. Since you already have a Java keystore, run the following command to issue a certificate signing request. Use this existing keystore for the CSR rather than creating a new one, because you will import the new certificate into this keystore.
5. Submit the certificate signing request file AppDynamics.csr generated by the above example command to your Certificate Authority of choice.
a. When it's ready, the Certificate Authority will return the signed certificate and any root and intermediary certificates required for the trust
chain.
b. The response from the Certificate Authority should include instructions for importing the certificate if needed.
If the Certificate Authority supplies the certificate in text format, copy and paste the text into a text file.
6. If the obtained certificate is not in text format, you can list its contents as follows.
7. Import the signed certificate obtained into the keystore that you already have.
keytool -import -alias s1as -file <your obtained certificate> -keystore keystore.jks
a. The imported certificate will replace the old one, provided you use the same alias as the previous one.
b. Sometimes the root and intermediate certificates of the certification authority are also expired. If that's the case, you will see the
message Failed to establish chain from reply.
8. If the root and intermediate certificates of the certification authority are expired, they also have to be imported in your cacerts.jks so that the
chain of trust can be established. You can follow your certification authority’s instructions to download the root and intermediate certificates.
a. Keep the same alias as before for root and intermediate when you import these certificates into cacerts.jks.
keytool -import -alias <previous alias used for the certificate> -file <path_to_root_or_intermediate_cert> -keystore <controller_home>/appserver/glassfish/domains/domain1/config/cacerts.jks
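The chain-of-trust idea can be sketched with openssl alone. The following toy example (all names hypothetical) creates a root CA, signs a server certificate with it, and verifies that the chain can be established — the same check that fails with "Failed to establish chain from reply" when the CA certificates are missing or expired:

```shell
# Create a toy root CA and sign a server certificate with it.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt -days 2 -subj "/CN=Toy Root CA"
openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -subj "/CN=controller.example.com"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 1
# Verify the chain: succeeds only if the CA certificate is present and valid.
openssl verify -CAfile ca.crt server.crt
```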
Related Pages:
Agent-to-Controller Connections
This page describes the security protocol used by an on-premises Controller, and how you can modify it.
Java Agent version 3.8.1 or earlier (see Agent and Controller Compatibility for complete SSL compatibility information)
.NET Agent running on .NET Framework 4.5 or earlier
If upgrading the agents or .NET framework is not possible, you will need to enable TLSv1 and SSL3 on the Controller using the asadmin command-line
utility. To use the utility, you will need to supply the password configured for the root user for the Controller.
These changes require a restart of the Controller application server, which results in a brief service downtime. You may wish to apply these changes when the downtime will have the least impact.
To maintain a secure environment, APIs that are downstream of the Controller should also use TLS. If SSL3 is required, you can enable it. See the Oracle
JDK 8 documentation.
http(s)://<hostname>:<port>
You do not need to restart the Controller application server since the configuration change job automatically does so for you.
To enable stronger encryption keys in the Controller, follow the instructions for the Controller version you are running.
After restarting the Controller app server, the following cipher suites become available:
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
You can set up SSH (Secure Shell) with public/private key pairs so that you do not have to type the password each time you access a Controller machine by SSH. Setting up keys also allows scripts and automation processes to access the Controller easily. You can generate DSA keys or, for stronger encryption, RSA keys.
% ssh-keygen -t dsa
2. At the following prompt, press Enter to accept the default key location, or type another:
If SSH continues to prompt you for your password, verify the permissions on your remote .ssh directory. It should be readable, writable, and accessible only by you (octal 700):
5. Open the local ~/.ssh/id_dsa.pub file and paste its contents into the ~/.ssh/authorized_keys file on the remote host.
6. Update the permissions on the authorized_keys file on the remote host as follows:
% ssh-keygen -t rsa
Otherwise, the remaining steps are identical to those beginning with step 2 in the steps above.
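For non-interactive use (for example, in provisioning scripts), ssh-keygen can generate the key pair without prompting. A minimal sketch; the output filename is hypothetical, and `-N ""` sets an empty passphrase:

```shell
# Generate an RSA key pair non-interactively into the current directory,
# then print the fingerprint of the public key.
ssh-keygen -t rsa -b 4096 -N "" -f ./id_rsa_demo -q
ssh-keygen -lf ./id_rsa_demo.pub
```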
Related pages:
The Controller creates a secure credential keystore that holds a secret key used to encrypt credentials.
Stored Credentials
The secure credential store manages the following credentials:
Back up the credential store as part of your normal backup procedures for the Controller, as described in the following section.
If you run the Controller in high availability mode, both the primary Controller and the secondary Controller must use the same secure credential keystore
file. If you use an HA deployment strategy, verify that it propagates the secure credential keystore file from the primary to the secondary.
As detailed in the sections that follow, the steps are broken into these parts:
For example:
The secure credential store utility confirms it created and initialized the keystore:
For example:
The secure credential store utility writes out an obfuscated password for use in the Controller configuration. For example:
s_gsnwR6+LDch8JBf1RamiBoWfMvjjipkrtJMZXAYEkw8=
<controller_home>/bin/controller.sh login-db
4. Update the secure credential keystore password to the newly obfuscated password:
UPDATE global_configuration_cluster
SET value = '<obfuscated_secure_credential_keystore_password>'
WHERE name = 'scs.keystore.password';
<controller_home>/bin/controller.sh login-db
2. Update the account access key for the account to the plain text string. When the Controller starts, it will encrypt the account access key:
UPDATE account
SET access_key = '<plain_text_account_access_key>',
encryption_scheme = NULL
WHERE id = <account_id>;
You can get the account id by running the following query: SELECT id account_id, name account_name, access_key, encryption_scheme FROM account;
3. If you changed the plain text value of the account access key, update the account access key for the agent users. In this case, you must update the access key for all of the agents.
The access key belongs to the "customer1" account in a single-tenant Controller and the "default" account in a multi-tenant Controller.
In addition, account_id is the account id of the "customer1" account in a single-tenant Controller and the "default" account in a multi-
tenant Controller.
4. If you have default license rules, update the account access key using v1_license_rules API.
For earlier Controller versions, you must use browser tools to migrate license rules.
<controller-dir>/appserver/glassfish/domains/domain1/appagent/ver4.X.X.X/conf/controller-info.xml
e. Stop appserver.
f. Start appserver.
If you use LDAP, DBmon, or HTTP Request Actions and Templates, then you must also reconfigure those components with the same passwords to ensure that they are encrypted with the new SCS key.
In addition to implementing Server Authentication, you can also implement mutual (client and server) authentication. Client authentication enables the
Controller to ensure that only authorized and verified agents can establish connections. These procedures outline the workflow to implement mutual
authentication.
Before Starting
These agents support client authentication:
Java Agent
Database Agent
Machine Agent
.NET Agent for Windows
It is good practice to set up and verify client authentication on one agent first. After you confirm that client authentication is working for that agent,
proceed with configuring additional agents.
If you have a "hybrid" environment, with Server Authentication only for some agents and Server and Client Authentication for others, you might
want to set up and configure multiple HTTP Listeners in Glassfish: one for Server Authentication only, and another for both Server and Client
Authentication.
The procedures described on this page use the default key and keystore password (changeit) for the keystore. Before you proceed with this
workflow, it is good practice to:
1. Change this default password, as described in "Change Keystore Password" under Controller SSL and Certificates.
2. Use the new password when you perform these procedures.
Instead of using plain text passwords in the procedures, you can specify encrypted or obscured passwords as described in Encrypt Agent
Credentials.
3. (Optional) To view information about the public and private keys in keystore.jks, enter the following command:
4. Import the Controller public key from server.cer into an agent keystore. The following command creates a new keystore (agent_truststore.jks); the Controller sends the public key from this keystore to agents when they set up an SSL connection.
5. This command displays the certificate and asks if you trust it. Answer yes.
1. Copy the agent_truststore.jks file from the Controller to the root directory of the agent (<machine-agent-home>, <java-agent-home>, or <database-agent-home>).
2. For each authorized agent, specify the following properties in the <agent-home>/conf/controller-info.xml file as follows:
<controller-ssl-enabled>true</controller-ssl-enabled>
<controller-port>443</controller-port>
<controller-keystore-filename>agent_truststore.jks</controller-keystore-filename>
<controller-keystore-password>changeit</controller-keystore-password>
1. (Optional) To view information about the public and private key in the Controller keystore, enter:
2. Create a keystore (clientkeystore.jks) that includes the Controller public/private keypair. In <controller-keystore-home>, enter:
The keytool prompts you for your name, organization, and other information it needs to generate the key. AppDynamics App Agents use SunX509
as the default keystore factory algorithm. If keystores in your environment use something other than SunX509, you need to specify the algorithm
to the App agent. You can do so using the system property appdynamics.agent.ssl.keymanager.factory.algorithm. For example, to
set the algorithm to PKIX, add this to the startup command of the agent-monitored JVM:
-Dappdynamics.agent.ssl.keymanager.factory.algorithm=PKIX
3. Generate a certificate signing request (client.csr) that can be signed by a Certificate Authority (CA).
4. Get the request (client.csr) signed by a trusted CA. This command uses the Controller as a CA, which creates a new file (signedClient.
cer) with the Controller-signed certificate.
6. Verify that the certificate is signed by the authentic Certificate Authority. You can:
• Copy the public key of the signing authority into the trusted root set, or
• Import the public key of the signing authority into the client keystore.
This command does the latter by importing the Controller public key from server.cer into clientkeystore.jks.
This command asks if you trust the certificate; when you enter yes, this message should display:
The keystore should show entries for controller-alias and client-alias (which is still unsigned).
8. Import the signed public key certificate into the client keystore. This command imports signedClient.cer into clientkeystore.jks.
You now have a password-protected clientkeystore.jks file on the Controller with a signed certificate that verifies the Controller's
authenticity.
9. Verify that the trusted root certificate on the Controller includes the public key of the signing authority. This procedure used the Controller as the
Certificate Authority, so the public key is already included. To verify, enter:
The public key of the signing authority should now be part of the agent's public key certificate.
1. Copy the clientkeystore.jks file from the Controller to the following directory on the agent:
Database agent: <database agent home>/conf
Java agent: <java-agent-home>/conf
Machine Agent: <machine-agent-home>
2. Specify these properties in the <agent-home>/conf/controller-info.xml file as follows:
<use-ssl-client-auth>true</use-ssl-client-auth>
<asymmetric-keystore-filename>clientkeystore.jks</asymmetric-keystore-filename>
<asymmetric-keystore-password>changeit</asymmetric-keystore-password>
<asymmetric-key-password>changeit</asymmetric-key-password>
<asymmetric-key-alias>client-alias</asymmetric-key-alias>
set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.key-store=keystore.jks
set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.client-auth-enabled=true
set configs.config.server-config.network-config.protocols.protocol.http-listener-2.ssl.trust-store=cacerts.jks
6. To verify that the properties are set correctly, run the following command. Again, this example assumes http-listener-2.
get configs.config.server-config.network-config.protocols.protocol.http-listener-2.*
The information and instructions below are intended for on-premises deployments only. SaaS deployments are managed by AppDynamics.
Related pages:
Before Upgrading
Before you upgrade to a newer version of the platform, complete the following tasks:
Upgrade Order
Follow the instructions for each service in this order:
Related pages:
You can use the Enterprise Console to discover and upgrade existing AppDynamics components, such as a Controller or Events Service. After you
discover the components, they can be added to the platform and be managed by the application. This process can be performed with the GUI or command
line. You can discover Controllers that are version 4.1 or later.
Before you discover existing components, you need the following information:
Controller
Events Service
When the Enterprise Console discovers a component, it also checks to see if an upgrade is available and performs the upgrade.
Controller
Use the discover-upgrade-controller command to discover and upgrade a Controller:
If your upgrade fails, you can resume by passing the flag useCheckpoint=true as an argument after --args.
Events Service
The discover-events-service command can discover and upgrade an Events Service.
bin/platform-admin.sh discover-events-service --hosts <host 1> <host 2> <host 3> --installation-dir <Events Service installation directory>
Instead of listing each host, you can specify a line-separated list in a text file:
bin/platform-admin.sh discover-events-service -n <file path for the host file> --installation-dir <Events Service installation directory>
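The host file is simply one hostname per line. A minimal sketch with hypothetical hostnames:

```shell
# Write a line-separated host file for use with the -n option.
printf '%s\n' events1.example.com events2.example.com events3.example.com > hosts.txt
cat hosts.txt
```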