OpenText™ Documentum™ Platform and Platform Extensions
Cloud Deployment Guide
EDCSYCD160700-IGD-EN-02
Rev.: 2019-Sept-25
This documentation has been created for software version 16.7.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Support: https://support.opentext.com
For more information, visit https://www.opentext.com
One or more patents may cover this product. For more information, please visit https://www.opentext.com/patents.
Disclaimer
Every effort has been made to ensure the accuracy of the features and techniques presented in this publication. However,
Open Text Corporation and its affiliates accept no responsibility and offer no warranty whether expressed or implied, for the
accuracy of this publication.
• reduce the high operating costs to develop, manage and maintain on-premises
applications
• avoid end user adoption issues caused by slow performance and lengthy
deployment timelines
• gain access to extensive resources to support EIM applications
• deploy EIM applications and grow as needed to scale to your business needs
Because enterprise data and workloads are increasingly moving to the cloud, OpenText encourages you to choose a cloud deployment over an on-premises solution.
Revision History
October 2019: Initial publication.
Chapter 1
Deploying Documentum Platform and Platform
Extensions applications on Docker environment
1.1 Introduction
You can deploy and configure Documentum Platform and Platform Extensions
applications on the supported Docker containers.
Note: Docker images for Documentum Server are provided in the following configurations: Ubuntu/PostgreSQL and CentOS/PostgreSQL.
• Check that the kernel version is supported:
uname -r
• (Only for CentOS) Check the file system type. If the Docker mount point is an xfs
file system, then set d_type to true. Docker Documentation contains more
information.
• For the Documentum Server Docker image, the location of INSTALL_HOME is /opt for both the CentOS and Ubuntu images. Ensure that the Docker Compose file uses /opt as your Documentum home directory.
• For a stateless configuration on CIFS, you must create the volumes manually and then use the GID and UID identifiers for the netshare plugin. Docker Documentation contains the workaround details.
• The UID must be the same (synchronized) when deploying Documentum Server across different host systems, to allow seamless upgrade or sharing of data between the two systems.
• If you use the netshare plugin for remote data and the service is restarted for any reason, you must run the following command:
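For example, a typical restart of the netshare plugin for NFS (the same command shown later in this guide) looks like the following; adjust the base directory and file system type for your environment:
./docker-volume-netshare --basedir=/var/lib/docker/volumes --verbose=true nfs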
You can use the configured YML file names in your up command to start the
container.
For example, Documentum provides the following compose YML files:
– CS-Docker-Compose_Stateless.yml
– CS-Docker-Compose_Ha.yml
– CS-Docker-Compose_Seamless.yml
• If the image is a TAR file, then load the image into the local registry using the
following command and update the Documentum Server image name:
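For example, assuming a placeholder archive name for the Documentum Server image:
$ docker load -i <documentum_server_image>.tar
$ docker images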
Operating system: Red Hat Enterprise Linux 6.7 or 7.0 (64-bit)
Free disk space: 80 GB
RAM: 8 GB
Swap space: 8 GB
Free space in temporary directory: 2 GB
Operating system: Red Hat Enterprise Linux 7.0 (64-bit) or higher
Database: Oracle server 12c
Note: Oracle server can be on any supported operating system. However, for illustrative purposes, all information in this document assumes that Oracle server is installed on a Linux platform. Oracle Documentation contains more details on the hardware and software requirements for this machine.
The Release Notes document contains the software requirements information for your
product.
2. Create a container to install minimum RPMs and Oracle client. For example:
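A minimal sketch, assuming a locally available RHEL 7 base image and a placeholder container name:
$ docker run -i -t --name oracle_client <rhel7 base image> /bin/bash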
3. Copy all the packages or RPMs to the Docker container to install all the required
RPMs. First, copy the packages from CD/ISO to your host Docker Machine.
After that, copy the packages folder from the Docker Machine to the Docker
container. For example:
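For example, assuming the packages were copied to /root/Packages on the Docker host and the container is named oracle_client (placeholder names):
$ docker cp /root/Packages oracle_client:/Packages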
4. Log in to the Docker container and install the createrepo package. For
example:
$ createrepo -v /Packages
$ vi /etc/yum.repos.d/rhel7.repo
[rhel7]
name=RHEL 7
baseurl=file:///Packages
gpgcheck=0
enabled=1
9. Install all the required RPMs. For example, run the following commands in the
given sequence:
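For example, assuming the local repository created in the previous steps; the exact package names depend on your environment:
$ yum install -y <RPM package 1> <RPM package 2>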
Note: Ensure that you also install all the dependent RPMs.
10. To ensure that dmdbtest does not fail while loading the libsasl2.so.2 shared library during repository configuration, perform the following steps in the RHEL Docker container:
root:~ # cd /usr/lib64
root :/usr/lib64# ln -s libsasl2.so.3 libsasl2.so.2
11. After installing all the required RPMs, remove the /Packages folder in the container.
Install the Oracle client on the Docker container to connect to the Oracle database, which is outside of the container (on a different machine).
a. Log in to the Docker container with root account and create groups. For
example:
$groupadd oinstall
$groupadd dba
b. Create an Oracle user with oinstall as the initial login group and dba as the secondary group, and set its password. For example (the useradd command shown is a typical sketch):
$useradd -g oinstall -G dba oracle
$passwd oracle
2. Create directories for Oracle client in the Docker container. For example:
$mkdir -p /u01/app/oracle
3. Change the ownership and permissions for the directories. For example:
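For example, assuming the oracle user and oinstall group created in the previous step:
$ chown -R oracle:oinstall /u01/app
$ chmod -R 775 /u01/app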
4. Copy the Oracle client installer from Docker to the Docker container. For
example:
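For example, assuming placeholder names for the installer archive and the container:
$ docker cp <oracle_client_installer>.zip oracle_client:/u01/app/ora_client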
5. Log in to the Docker container with root account and change the permissions
for the ora_client folder. For example:
6. Set the DISPLAY environment variable to install the Oracle client in GUI mode.
For example:
$export DISPLAY=10.30.87.106:0.0
$./runInstaller
8. After the installation, set the Oracle and path environment variables. For
example:
$export ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_1
$export PATH=$ORACLE_HOME/bin:$PATH
$export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
Run netca to configure the Local Net Service Name. Also, provide the Oracle service name and database hostname details. Use 10.31.71.131 as the Oracle hostname in $TNS_ADMIN/tnsnames.ora. The Documentum Docker scripts automatically replace this value (10.31.71.131) with the actual Oracle hostname in the configuration file.
9. After configuring the Oracle client, remove the client installer /u01/app/ora_client and save the changes to the image. For example:
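For example, assuming placeholder container and image names:
$ rm -rf /u01/app/ora_client
$ docker commit oracle_client <oracle_client_image>:<tag>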
Manually export the following environment variables in the shell before running this Docker Compose command when creating or recreating the container:
docker-compose -f <compose file name> up -d
Note: Repository passwords must not contain special characters other than the following: . (period), _ (underscore), - (hyphen)
$uname -r
b. Install the Docker engine.
c. Start the Docker daemon service.
2. Download and copy the Documentum Server Docker image zip file to the
Docker base machine. The Docker image contains the PostgreSQL client,
Documentum Server, two connection brokers, and a repository in a single
container.
3. Extract the zip file. Load the Documentum Server Docker image into the Docker
base machine. For example:
4. Verify that the Documentum Server Docker image is loaded and listed in the
Docker base machine. For example:
$ docker images
9. To support the random number generator and to avoid issues while renaming the log files, run the following command with the root account in the container:
3. Start the Docker netshare plugin if the data or share is on an external file system. For example:
./docker-volume-netshare --basedir=/var/lib/docker/volumes --verbose=true nfs
2. Install the Docker Engine and Docker Compose file on your host machine.
Docker Documentation contains more information.
4. Extract the packaged TAR file using the following command format:
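For example, assuming a placeholder file name:
$ tar -xvf <package name>.tar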
6. Run the docker images command to verify that the Docker image is loaded successfully and listed.
5. Run the docker ps command to view all the active Docker containers.
6. To verify the deployment, access the URL of the IJMS container. For example,
http://<Docker_host>:<JMS_PORT>/DmMethods/servlet/DoMethod.
volumes:
  <container_name>_log:
  <container_name>_mdserver_conf:
If you have given a unique port number, such as 9280, for JMS_PORT in Example 1-2, "statelessda_compose.yml to create container and volumes" on page 23, it is resolved as 9280:9180, meaning that the container runs on port 9180 internally and that port is exposed externally as 9280. You can use docker-compose.yml as a template to expose the port, as needed.
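A minimal sketch of the corresponding ports mapping in the Compose file, using the values from this example:
ports:
  - "9280:9180"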
3. Provide all the required details in the dfc_conf.conf file. Read the description
of every field and provide valid values for each parameter.
Note: Ensure that the docker daemon is running. If it is not running, run
the dockerd command.
5. Run the docker images command to verify if the Docker image is loaded
successfully and listed.
5. Run the docker ps command to view all the active Docker containers.
2. Provide all the required details in the bocs_compose.yml file. Read the
description of every field and provide valid values for each parameter.
docker-compose -f bocs_compose.yml up -d
Note: Ignore the Protocol family unavailable error in the install.log file
located at /opt/bocs-docker/logs.
3. Provide all the required details in the dfs_conf.conf file. Read the description
of every field and provide valid values for each parameter.
Note: The system must have a 64-bit architecture and a supported kernel version.
Start the Docker engine service if it is not running.
3. Run the container and modify the ports as per your requirement.
This command mounts three directories on the host machine to the newly created Docker container.
Otherwise, the IP addresses of the Docker-engine host should be set in the fields.
• Verify the deployment. The "Verifying the deployment" section on page 29 describes the steps to verify the deployment.
Access the directory that contains the CIS binaries, and execute the cisSetup.bin
with silent.ini:
docker$ cd /home/cisInstaller
docker$ ./cisSetup.bin -f silent.ini
2. Start and stop the services.
The services start automatically after the installation is completed.
To stop the CIS service manually, execute the following script file:
docker$ cd $DOCUMENTUM/CIS/service
docker$ ./stopCIS
To start CIS services manually, run the following script file in the same
directory:
docker$ ./startCIS
INSTALLER_UI=silent
CIS.SKIP_DEPLOYING_DAR=false
CIS_SECTION.CIS_REPOSITORY_NAME=sampleRepository
CIS_SECTION.CIS_REPOSITORY_USER=Administrator
CIS_SECTION.SECURE.CIS_REPOSITORY_PASSWORD=password
CIS_SECTION.CIS_REPOSITORY_DOMAIN=
CIS_SECTION.CIS_HOST=cis
CIS_SECTION.CIS_PORT=8079
CIS_SECTION.CIS_JMX_AGENT_PORT=8061
CIS_SECTION.LUXID_PORT=55550
DFC.DOCBROKER_HOST=192.168.10.20
DFC.DOCBROKER_PORT=1489
DFC.DFC_BOF_GLOBAL_REGISTRY_REPOSITORY=sampleRepository
DFC.DFC_BOF_GLOBAL_REGISTRY_USERNAME=dm_bof_registry
DFC.SECURE.DFC_BOF_GLOBAL_REGISTRY_PASSWORD=password
USE_CERTIFICATES=false
DFC_SSL_TRUSTSTORE=
DFC_SSL_TRUSTSTORE_PASSWORD=
DFC_SSL_USE_EXISTING_TRUSTSTORE=false
2. Navigate to Cabinets > System > Modules > Aspect and check that the module
cis_annotation_aspect is present.
2. Navigate to the Content Intelligence node and verify that the following sections
are present:
• Taxonomies
• Category Class
• Document Set
• My Categories
3. Navigate to Cabinets > System > Applications > CI and verify that the
following folders are present:
• AttributeProcessing
• Classes
• Configuration
• DocsetConfiguration
• DocumentSets
• MetadataExtrationRules
• Runs
• TaxonomySnapshots
• XMLTaxonomies
On Windows hosts, the CIS services are installed in automatic startup mode. Make sure that all services have started correctly; if not, start them manually or reboot to start them automatically.
1. Select My Computer > Manage > Services and Applications > Services.
3. For the entity detection analysis, make sure that the following services are
started:
If you want to start them manually, start the Documentum CIS Luxid Starter
service first. This service starts the other services in the correct order.
dfc.search.external_sources.enable=true
dfc.search.external_sources.host=172.17.0.3
dfc.search.external_sources.port=3005
2. The configuration files of FS2 can be externalized outside the container on the host machine.
To do this, use the -v option at run time.
For example:
-v /home/fs2Storage/wrapper:/root/dctm/fs2/lib/wrapper \
--name fs2 ubuntu
In the example, the five directories containing the configuration files are stored
in the /home/fs2Storage/ folder.
Access the directory that contains the FS2 binaries, and execute fs2Setup.bin with
silent.ini:
docker$ cd /home/fs2Installer
docker$ ./fs2Setup.bin -f silent.ini
docker$ cd $DOCUMENTUM/fs2/bin
docker$ ./aOServer -nogui
docker$ ./aOAdmin
4. To verify the deployment, access the URL of REST services container. For
example, http://localhost:8080/dctm-rest/services.
CONFIGURE_THUMBNAIL_SERVER = YES
THUMBNAIL_SERVER_PORT = 8081
THUMBNAIL_SERVER_SSL_PORT = 8443
2.1 Overview
Kubernetes is a portable, extensible, open-source orchestration engine for automating the deployment, scaling, and management of containerized applications. Kubernetes can be considered a container platform, a microservices platform, and a portable cloud platform. It provides a container-centric management environment.
You can perform the containerized deployment using the Docker images and Helm Charts that are packaged with the release.
1. Download and configure the Docker application from the Docker website.
Docker Documentation contains more information.
2. Download and configure the Helm (client) and Tiller (server) application from
the Helm website.
For Tiller and Role-based Access Control:
a. Load the image into the local Docker registry using the following command
format:
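For example, assuming placeholder values for the image archive, registry host, and tag:
$ docker load -i <image>.tar
$ docker tag <image name>:<tag> <registry host>:<registry port>/<image name>:<tag>
$ docker push <registry host>:<registry port>/<image name>:<tag>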
6. Download the sample values.yaml file for PostgreSQL from the GitHub
website. Open and provide the appropriate values for all the required variables.
7. Download the Documentum Server Image (CentOS only) and Helm Chart TAR
files from OpenText My Support.
4. Load the Graylog Docker image using the following command format:
Upload the Graylog Docker image to your local repository and configure it, as appropriate.
5. Open the cs-secrets/values.yaml file and provide the appropriate values for
the variables depending on your environment to pass them to your templates as
described in the following table:
6. Store the secret values in your Kubernetes environment using the following
command format:
For example:
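A typical Helm 2 command, assuming the cs-secrets chart directory and placeholder release, namespace, and Tiller namespace names:
helm install ./cs-secrets --name <release name> --namespace <namespace> --tiller-namespace <tiller namespace>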
7. Verify the status of the stored secret values file using the following command
format:
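For example, assuming a placeholder namespace:
kubectl get secrets --namespace <namespace>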
8. Open the docbroker/values.yaml file and provide the appropriate values for
the variables depending on your environment to pass them to your templates as
described in the following table:
9. Deploy the connection broker Helm in your Kubernetes environment using the
following command format:
For example:
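A typical Helm 2 command, assuming the docbroker chart directory and placeholder names:
helm install ./docbroker --name <release name> --namespace <namespace> --tiller-namespace <tiller namespace>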
10. Verify the status of the connection broker Helm deployment using the following
command format:
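For example, assuming a placeholder release name:
helm status <release name> --tiller-namespace <tiller namespace>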
11. Verify the status of the deployment of connection broker pod using the
following command format:
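For example, assuming a placeholder namespace:
kubectl get pods --namespace <namespace>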
13. Deploy the Documentum Server Helm in your Kubernetes environment using
the following command format:
For example:
14. Verify the status of the deployment of Documentum Server Helm using the
following command format:
15. Verify the status of the deployment of Documentum Server pod using the
following command format:
17. Deploy the Documentum Server DFC properties Helm in your Kubernetes
environment using the following command format:
For example:
18. Verify the status of the deployment of Documentum Server DFC properties
Helm using the following command format:
Notes
3. Run the following command inside all Documentum Server 16.4 pods to extract
the AEK from lockbox:
/opt/dctm/product/16.4/bin/dm_crypto_create -lockbox
lockbox.lb -lockboxpassphrase Password@123 -keyname aek_name -
removelockbox -output aek_name
a. Run the following command inside all Documentum Server 16.4 pods to
extract the AEK from lockbox:
/opt/dctm/product/16.4/bin/dm_crypto_create -lockbox
lockbox.lb -lockboxpassphrase Password@123 -keyname aek_name
-removelockbox -output aek_name
b. Set the value of majorUpgrade to true in content-server/values.yaml.
c. Update new image details in content-server/values.yaml for the
Documentum Server Helm Chart.
d. Upgrade using the following command format:
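A typical Helm 2 upgrade command, assuming the content-server chart directory and placeholder names:
helm upgrade <release name> ./content-server -f content-server/values.yaml --tiller-namespace <tiller namespace>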
Notes
• If you encounter any problems during the upgrade process with the
new image, then the upgrade process stops automatically. Also,
you can roll back to the previous image. “Rolling back the upgrade
process” on page 59 contains the instructions.
5. Verify the status of the successful upgrade using the following steps:
a. Check the installation log files. If the upgrade is successful, no errors are
reported in the log files.
b. Check the Documentum Server version. Log in to the pod and run the IAPI
command to verify the version. If the upgrade is successful, then the new
Documentum Server version is displayed.
c. Check the status of the pod. If the upgrade is successful, the status of the
pod is active.
Notes
• Upgrading both the image and the replicas is supported. Update only the image-related and replica values in content-server/values.yaml. All other values must not be changed.
• The upgrade process runs in descending order: it starts with the second Documentum Server (for example, documentumserver2), followed by the first Documentum Server (for example, documentumserver1).
• If you encounter any problems during the upgrade process with the
new image, then the upgrade process stops automatically. Also, you can
roll back to the previous image. “Rolling back the upgrade process”
on page 59 contains the instructions.
• While upgrading the Documentum Server pod, the existing
Documentum Server pod is deleted and new Documentum Server pod
is created. Volume Claim Templates and Persistent Volume Claims
remain as is and the new pods continue to mount the old VCTs and
PVCs.
• The upgrade process takes approximately five minutes for each pod.
4. Verify the status of the successful upgrade using the following steps:
a. Check the installation log files. If the upgrade is successful, no errors are
reported in the log files.
b. Check the Documentum Server version. Log in to the pod and run the IAPI
command to verify the version. If the upgrade is successful, then the new
Documentum Server version is displayed.
c. Check the status of the pod. If the upgrade is successful, the status of the
pod is active.
2. Roll back to the previous image using the following command format:
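A typical Helm 2 rollback command, with placeholder release name and revision number:
helm rollback <release name> <revision number> --tiller-namespace <tiller namespace>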
2.3.4 Limitations
• Installation owner is predefined and cannot be changed. The value is dmadmin.
• Installation path is predefined and cannot be changed. The value is /opt/dctm/product/<product version>.
2.3.5 Troubleshooting
Symptom: When you check the status of available pods using the kubectl get pods command, the READY value of one or more pods reads as 1/2.
Cause: One of the two containers in the specified pod(s) is down or unavailable.
Fix: Delete the pod using the kubectl delete pods <name of the pod> command. The pod is recreated automatically. Or run:
kubectl describe statefulset <name of the statefulset>
2.4.1 Prerequisites
1. Perform the steps from Step 1 to Step 4 in “Deploying and configuring
Documentum Server on private cloud” on page 37.
3. Download the IJMS Image (CentOS only) and Helm Chart TAR files from
OpenText My Support.
3. Update the ijms/values.yaml file and provide the appropriate values for the
variables depending on your environment to pass them to your templates as
described in the following table:
4. Deploy the IJMS Helm in your Kubernetes environment using the following
command format:
For example:
5. Verify the status of the deployment of IJMS Helm using the following command
format:
6. Verify the status of the deployment of IJMS pod using the following command
format:
2.4.3 Limitations
• The installer updates the IJMS config objects only in the primary Documentum Server. You must manually update the IJMS config objects in all other Documentum Servers (replicas).
2.4.4 Troubleshooting
There is no troubleshooting information for this release.
3. Download the Graylog Docker Image from the Docker Hub website. Graylog
Docker Documentation contains more information about Administrator
configuration.
4. Load the Graylog Docker image using the following command format:
Upload the Graylog Docker image to your local repository and configure it, as appropriate.
5. Open the da/values.yaml file and provide the appropriate values for the
variables depending on your environment to pass them to your templates as
described in the following table:
Note: OpenText recommends 2 replica pods.
buildNo (buildNo): Specifies the build number of the current deployment.
appName (appName): Specifies the application name for Documentum Administrator. The format is da-app-<build_no>.
images (da):
• repository: Specifies the path of the repository. The format is <IP Address>:<Port>.
• name: Specifies the name of the Documentum Administrator image. For example, /da/centos/stateless/dastateless.
• tag: Specifies the tag as a version-specific number.
• pullPolicy: Specifies the image pull policy. For example, IfNotPresent.
For example:
2.5.3 Limitations
Installation path is predefined and cannot be changed. The value is /opt/tomcat/webapps/da.
2.5.4 Troubleshooting
Symptom: When you check the status of available pods using the kubectl get pods command, the READY value of one or more pods reads as 1/2.
Cause: One of the two containers in the specified pod(s) is down or unavailable.
Fix: Delete the pod using the following command: kubectl delete pods <name of the pod>.
3. Update the dfs/values.yaml file and provide the appropriate values for the
variables depending on your environment to pass them to your templates as
described in the following table:
For example:
2.6.3 Limitations
Installation path is predefined and cannot be changed. The value is /opt/tomcat/webapps/dfs.
2.6.4 Troubleshooting
Symptom: When you check the status of the deployed image, it results in the Error: ImagePullBackOff error.
Cause: Incorrect Helm deployment.
Fix: Delete the Helm deployment using the following command:
helm delete <release name> --purge --tiller-namespace <name of tiller namespace> --namespace <name of namespace>
The following items form a checklist that you can use to prepare to deploy
Documentum REST Services in the Kubernetes environment:
2. Extract the contents of the TAR file using the following command format:
3. Load the REST image, tag it, and load it to the registry.
8. Edit the values.yaml file and provide the appropriate values for the variables
depending on your environment to pass them to your templates as described in
the following table:
10. Use the helm or kubectl command to verify the status of REST Helm
deployment.
You can either modify values.yaml or set the variables on the command line.
...
#ssl
ssl:
  keystoreFile: /root/rest/persistence/foobar/ks.jks
  keystorePwd: passw0rd
  keyAlias: tomcat
  keyPwd: passw0rd
  keystoreType: JKS
...
#volume
securityVolumeMountPath: /root/rest/persistence
...
#persistence
persistence:
  enabled: true
  subPath: security
...
graylog:
  enabled: true
  image: gcr.io/documentum-search-product/graylog-sidecar
  imagePullPolicy: Always
  server: rest-graylog-headless.dctm-rest.svc.cluster.local
  port: 9000
  serviceToken: 87ckh5e9aammi6rd6g75ceuibce4ot8icb3itpeq4bibea25ge0
  logsDir: /root/rest/logs
filebeat.inputs:
- input_type: log
  paths:
    - /pod-data*.log
  type: log
output.logstash:
  hosts: ["rest-graylog-headless.dctm-rest.svc.cluster.local:5044"]
path:
  data: /var/lib/graylog-sidecar/collectors/filebeat/data
  logs: /var/lib/graylog-sidecar/collectors/filebeat/log
You can roll back the upgrade using the following command format:
2.7.5 Extensibility
Documentum REST Services supports extensibility for you to customize the
resources.
2.7.6 Limitations
There are no limitations for this release.
2.7.7 Troubleshooting
There is no troubleshooting information for this release.
1. Download and configure the Docker application from the Docker website.
Docker Documentation contains more information.
2. Download and configure the Helm (client) and Tiller (server) application from
the Helm website.
Helm Documentation contains more information.
3. Download and configure the Kubernetes application from the Kubernetes
website.
Kubernetes Documentation contains more information.
4. Download and configure the PostgreSQL database (server) from the
PostgreSQL website.
Click Create.
b. Create a container registry. A container registry is used to store the Docker images in Azure. A standard container registry can store up to 100 GB of images.
Navigate to Home > Container registries > Create container registry and
provide the following information:
• Registry name
• Subscription
• Resource group
• Location
• Admin user
• SKU
Click Create.
c. Create an Azure Kubernetes Service (AKS). AKS is a managed container
orchestration service, based on the open source Kubernetes system, which
is available on the Azure public cloud.
Navigate to Home > Kubernetes services > Create Kubernetes cluster and
provide the information in the following tabs:
• Basics: Provide valid values for all the mandatory fields such as
PROJECT DETAILS, CLUSTER DETAILS, and so on. Select the
E4s_V3 with the family as Memory Optimized for the virtual machine
size.
Click Next: Authentication >.
• Authentication: Provide valid values for all the mandatory fields such
as CLUSTER INFRASTRUCTURE and KUBERNETES
AUTHENTICATION AND AUTHORIZATION. Enable Role-based
access control (RBAC).
Click Next: Networking >.
• Networking: Disable HTTP application routing and set the proper
ingress controller. Also, select Basic for Network configuration.
Click Next: Monitoring >.
• Monitoring: Enable the container monitoring. Also, select the log
analytics workspace.
Click Next: Tags >.
az aks install-cli
sudo yum update azure-cli
sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
sudo yum install azure-cli
az aks install-cli
az aks get-credentials --resource-group dctm --name dctmaks
az login
az login -u <id>@opentext.com
After the configuration, view the configuration of Azure using the following
command format:
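Assuming the kubeconfig entry created by az aks get-credentials, the following command displays the merged configuration:
kubectl config view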
Example output:
client-certificate-data: REDACTED
client-key-data: REDACTED
token: 0506b0a1bd38280fc155a15ae2641eb8
Example output:
linux-386/
linux-386/README.md
linux-386/tiller
linux-386/helm
linux-386/LICENSE
[root@skvcentos ~]#
c. Find the Helm binary and move it to the /usr/local/bin/helm folder
using the following command format:
mv linux-amd64/helm /usr/local/bin/helm
Example output:
9. Create a cluster role binding for a particular cluster role using the following
command format:
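A typical command, assuming the tiller service account and a placeholder namespace:
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=<namespace>:tiller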
Example output:
tiller-cluster-rule created
[root@skvcentos linux-386]#
10. Install Tiller (server) in your Kubernetes Cluster and set up the local
configuration in $HELM_HOME (default is ~/.helm/). Use the following command
to read $KUBECONFIG (default is ~/.kube/config) and identify the Kubernetes
clusters:
helm init
Initialize the Tiller in your namespace using the following command format:
helm init --service-account <name of the service account> --tiller-namespace <name of namespace>
Example output:
[root@skvcentos ~]# helm init --service-account tiller --tiller-namespace default
$HELM_HOME has been configured at /root/.helm.
Example output:
[root@skvcentos ~]# kubectl patch deploy --namespace default tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched (no change)
12. Fetch the details of the resource using the following command format:
az aks show --resource-group <resource group name> --name <name of the cluster> --query nodeResourceGroup -o tsv
Example output:
[root@skvcentos ~]# az aks show --resource-group dctm
--name dctmaks
Example output:
14. Create a storage class YAML file (for example, azstorageclass.yaml) with appropriate values for the parameters, and apply the configuration.
A storage class is used to define how an Azure file share is created. A storage account can be specified in the class.
Different types of storage are:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777   # permission mode
  - file_mode=0777
  - uid=1000        # user ID of the Documentum installation owner
  - gid=1000
parameters:
  skuName: Standard_LRS
  storageAccount: <name of the created storage account>
b. Apply the configuration to a resource using the name of the YAML file
using the following command format:
kubectl apply -f <name of the YAML file>.yaml
Example output:
[root@skvcentos ~]# kubectl apply -f azstorageclass.yaml
storageclass.storage.k8s.io/azurefile created
Resource is created if it does not exist yet. Ensure that you specify the
resource name.
15. Create the cluster role and cluster role binding in a YAML file (for example, azure-pvc-roles.yaml) with appropriate values for the parameters, and apply the configuration.
AKS clusters use Kubernetes RBAC to limit actions that can be performed. Roles
define the permissions to grant and bindings apply them to the desired users.
Example output:
Resource is created if it does not exist yet. Ensure that you specify the
resource name.
16. Create a persistent volume claim YAML file (for example, azure-pvc-roles.yaml) with appropriate values for the parameters, and apply the configuration.
A persistent volume claim (PVC) uses the storage class object to dynamically
provision an Azure file share.
Create a persistent volume claim using the following command format:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-volume-claim   # name of the volume claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 100Gi
Example output:
17. Create new role assignment for a user, group, or service principal using the
following command format:
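A typical Azure CLI command, with placeholder values for the assignee, role, and scope:
az role assignment create --assignee <service principal or user ID> --role <role name> --scope <resource ID>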
Example output:
docker pull <10.8.176.180:5000/contentserver/centos/stateless/cs:16.4.0100.0150>
Example output:
19. Download the Documentum Server Image (CentOS only) and Helm Chart TAR
files from OpenText My Support.
3. Open the cs-secrets/values.yaml file and provide the appropriate values for
the variables depending on your environment to pass them to your templates.
For example:
5. Verify the status of the stored secret values file using the following command
format:
6. Open the db/values.yaml file and provide the appropriate values for the
variables depending on your environment to pass them to your templates.
For example:
8. Verify the status of the database Helm deployment using the following
command format:
9. Verify the status of the deployment of database pod using the following
command format:
10. Open the docbroker/values.yaml file and provide the appropriate values for
the variables depending on your environment to pass them to your templates.
11. Deploy the connection broker Helm using the following command format:
For example:
12. Verify the status of the connection broker Helm deployment using the following
command format:
13. Verify the status of the deployment of connection broker pod using the
following command format:
15. Deploy the Documentum Server Helm in your Kubernetes environment using
the following command format:
For example:
16. Verify the status of the deployment of Documentum Server Helm using the
following command format:
17. Verify the status of the deployment of Documentum Server pod using the
following command format:
19. Deploy the Documentum Server DFC properties Helm in your Kubernetes
environment using the following command format:
For example:
20. Verify the status of the deployment of Documentum Server DFC properties
Helm using the following command format:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-d2
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: d2-aks-ingress.eastus.cloudapp.azure.com   # DNS name generated with the external IP address
    http:
      paths:
      - backend:
          serviceName: d2csd2config
          servicePort: 8080
        path: /D2-Config
      - backend:
          serviceName: d2csd2client
          servicePort: 8080
        path: /D2
2. Configure an FQDN for the public IP address of your ingress controller. Map
the external IP address to the DNS name using the Bash script. Use the
following command format:
#!/bin/bash
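# Sketch based on the Azure documentation pattern; the IP address and DNS label below are placeholders.
IP="<external IP address of the ingress controller>"
DNSNAME="<DNS label, for example, d2-aks-ingress>"
# Get the resource ID of the public IP address allocated to the ingress controller
PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv)
# Update the public IP address with the DNS name
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME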
3.2.4 Limitations
• The host name must be a fully qualified domain name (FQDN) and must not be longer than 59 characters.
• You must change the storage class as per the Azure Kubernetes service offering.
The default storage class provisions a standard Azure disk while the managed-
premium storage class provisions a premium Azure disk.
• To use Postgres as a service, you must create the database used for installing the repository. Update the fields as follows:
docbase:
  name: <docbase owner>
  id: <docbase id>
  existing: true
  index: <respective index value>
• Only HTTP configuration is supported for jmsProtocol and tnsProtocol.
3.2.5 Troubleshooting
3.3.3 Limitations
There are no limitations for this release.
3.3.4 Troubleshooting
There is no troubleshooting information for this release.
3.4.4 Limitations
There are no limitations for this release.
3.4.5 Troubleshooting
There is no troubleshooting information for this release.
Chapter 4
Deploying Documentum Platform and Platform
Extensions applications on Google Cloud Platform
1. Download and configure the Docker application from the Docker website.
Docker Documentation contains detailed information.
2. Download and configure the Helm (client) and Tiller (server) application from
the Helm website in the cluster namespace (your Cloud Shell machine).
Helm Documentation contains detailed information.
3. Download and configure the Kubernetes application from the Kubernetes
website.
Kubernetes Documentation contains detailed information.
4. Download and configure the PostgreSQL database (server) from the
PostgreSQL website.
a. From the GCP website, select the GCP project linked with your corporate
billing account.
b. Create a cluster. Navigate to Kubernetes Engine > Clusters, click CREATE
CLUSTER, and perform the following:
Click Create.
c. Click Connect next to the cluster you created and then click Run in Cloud
Shell.
A Google Cloud shell (a VM created by Google with pre-installed Kubectl
and gcloud SDK) is created.
d. Press Enter at the command that shows up.
The cluster credentials are fetched and a kubeconfig entry (the kubectl configuration file) is created.
e. Create a Kubernetes cluster namespace on the Cloud shell using the
following command format:
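For example, assuming a placeholder namespace name:
kubectl create namespace <namespace name>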
[HOSTNAME]/[PROJECT-ID]/[IMAGE]
For example:
gcr.io/documentum-d2-product/contentserver/centos/stateless/cs:16.4.0120
c. Load the tagged image to GCR. For example:
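Assuming the image was tagged as shown above, with placeholder project ID and tag:
docker push gcr.io/<PROJECT-ID>/contentserver/centos/stateless/cs:<tag>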
8. Create Role-based access control (RBAC) configurations for Helm and Tiller.
a. Create a service account for Tiller in the namespace using the following
command format:
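For example, assuming the service account is named tiller and the namespace is a placeholder:
kubectl create serviceaccount tiller --namespace <namespace name>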
Notes
Example output:
Client: &{SemVer:"v2.11.0",
GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b",
GitTreeState:"clean"}
Server:
&{SemVer:"v2.11.0",
GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b",
GitTreeState:"clean"}
• Provide a name to the Google Cloud Filestore instance. For example, demo-gcfss.
• Select a standard cluster template.
• Select the default authorized network.
• Select your region and zone for the Location type for better performance.
• Select the NFS mount point for the Fileshare name. For example, demogcfs.
• Provide a minimum of 1 TB for Fileshare Capacity.
A Google Cloud Filestore instance does not need to be created every time; the instance can be shared by multiple clusters.
Once the Google Cloud Filestore instance is created, click the instance and note
the IP address and path of the instance.
Note: The instance tier of the Google Cloud Filestore instance cannot be
modified once it is created. However, the Fileshare Capacity can be
modified.
RESOURCES:
==> v1/Deployment
NAME AGE
demo-nfs-client-provisioner 1s
==> v1/Pod(related)
NAME READY...
demo-nfs-client-provisioner-76986844 0/1...
==> v1/StorageClass
NAME AGE
gcp-rwx 1s
==> v1/ServiceAccount
demo-nfs-client-provisioner 1s
==> v1/ClusterRole
demo-nfs-client-provisioner-runner 1s
==> v1/ClusterRoleBinding
run-demo-nfs-client-provisioner 1s
==> v1/Role
leader-locking-demo-nfs-client-provisioner 1s
==> v1/RoleBinding
leader-locking-demo-nfs-client-provisioner 1s
11. Create a sample PVC (for example, testPVC.yaml) to test the dynamic
provisioning of ReadWriteMany (RWX) PVs using the following command
format:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: gcp-rwx
  resources:
    requests:
      storage: 1Mi
12. Verify if the corresponding Persistent Volume is created using the following
command format:
$ kubectl get pv
14. Download the Documentum Server Image (CentOS only) and Helm Chart TAR
files from OpenText My Support.
3. Update the image and tag fields in the values.yaml files of your Helm Charts
to point to the GCR images. For example:
images:
  repository: gcr.io
  contentserver:
    name: <name of Documentum product>/contentserver/centos/stateless/cs
    tag: <build number/version of Documentum product>
persistentVolume:
  csdataPVCName: documentum-data-pvc
  pvcAccessModes: ReadWriteMany
  size: 3Gi
volumeClaimTemplate:
  vctName: documentum-vct
  vctAccessModes: ReadWriteOnce
  size: 1Gi
  storageclass: standard
5. Open the cs-secrets/values.yaml file and provide the appropriate values for
the variables depending on your environment to pass them to your templates.
For example:
7. Verify the status of the stored secret values file using the following command
format:
8. Open the db/values.yaml file and provide the appropriate values for the
variables depending on your environment to pass them to your templates.
For example:
10. Verify the status of the database Helm deployment using the following
command format:
11. Verify the status of the deployment of database pod using the following
command format:
12. Open the docbroker/values.yaml file and provide the appropriate values for
the variables depending on your environment to pass them to your templates.
13. Deploy the connection broker Helm using the following command format:
For example:
14. Verify the status of the connection broker Helm deployment using the following
command format:
15. Verify the status of the deployment of connection broker pod using the
following command format:
17. Deploy the Documentum Server Helm in your Kubernetes environment using
the following command format:
For example:
18. Verify the status of the deployment of Documentum Server Helm using the
following command format:
19. Verify the status of the deployment of Documentum Server pod using the
following command format:
20. Download the NGINX (Ingress Controller) Helm from the Github website.
Ingress consists of two components:
• Ingress Resource: Collection of rules for the inbound traffic to reach Services.
These are Layer 7 (L7) rules that allow hostnames (and optionally paths) to
be directed to specific Services in Kubernetes.
• Ingress Controller: Acts upon the rules set by the Ingress Resource, typically
via an HTTP or L7 load balancer.
21. Deploy the NGINX Ingress Controller Helm using the following command
format:
For example:
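A typical Helm 2 command, assuming the stable/nginx-ingress chart; the release and namespace names below match the example output that follows:
helm install stable/nginx-ingress --name demo-nginx-ingress --namespace demo --tiller-namespace demo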
NAME: demo-nginx-ingress
LAST DEPLOYED: Sun Jun 9 15:48:55 2019
NAMESPACE: demo
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/ClusterRoleBinding
NAME AGE
demo-nginx-ingress 0s
==> v1beta1/Role
demo-nginx-ingress 0s
==> v1/Service
demo-nginx-ingress-controller 0s
demo-nginx-ingress-default-backend 0s
==> v1/ConfigMap
demo-nginx-ingress-controller 0s
==> v1/ServiceAccount
demo-nginx-ingress 0s
==> v1beta1/ClusterRole
demo-nginx-ingress 0s
==> v1beta1/RoleBinding
demo-nginx-ingress 0s
==> v1beta1/Deployment
demo-nginx-ingress-controller 0s
demo-nginx-ingress-default-backend 0s
==> v1/Pod(related)
...
NAME READY...
demo-nginx-ingress-controller-78cd47cf46-cw9q2 0/1...
demo-nginx-ingress-default-backend-5d47879fb7-5lptf 0/1...
The nginx-ingress controller is installed. It may take a few minutes for the
LoadBalancer IP to be available.
22. Verify the status of the NGINX Ingress Controller Helm deployment using the
following command format:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: test
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
Ingress:
tls:
- hosts:
- www.example.com
secretName: example-tls
23. (Optional) If TLS is enabled for the Ingress, you must provide a Secret
containing the certificate and key using the following command format:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: test
data:
  tls.crt: <base64 encoded certificate>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
24. Install a single host simple fan-out Ingress resource Helm to route the traffic to
the cluster-internal services using the following command format:
For example:
NAME: dctm-common-ingress
LAST DEPLOYED: Sun Jun 9 15:55:47 2019
NAMESPACE: demo
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/Ingress
NAME AGE
dctm-common-ingress 0s
25. The Ingress Resource determines the controller that is utilized to serve traffic.
Set the Ingress annotation to select the NGINX ingress controller. This is set
with an annotation, kubernetes.io/ingress.class, in the metadata section of
the Ingress Resource.
For example:
annotations:
  kubernetes.io/ingress.class: nginx
#Source: documentum-ingress/templates/ingress.yaml
#Single host path-based fan-out Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dctm-common-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "1200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1200"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1200"
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  # - host: <ingress-host>
  - http:
      paths:
      - backend:
          serviceName: documentum-apphost-cip-service
          servicePort: 8080
        path: /
      - backend:
          serviceName: documentum-xda
          servicePort: 7000
        path: /xda
      - backend:
          serviceName: documentum-da
          servicePort: 8080
        path: /da
      - backend:
          serviceName: documentum-webtop
          servicePort: 9000
        path: /webtop
      - backend:
          serviceName: documentum-server-jms-service
          servicePort: 9080
        path: /DmMethods
      - backend:
          serviceName: documentum-server-jms-services
          servicePort: 9080
        path: /bpm
      - backend:
          serviceName: documentum-server-jms-service
          servicePort: 9080
        path: /dmotdsrest
27. Perform the following to access the Documentum applications deployed inside
the Kubernetes environment from outside:
a. Obtain the external IP address of the load balancer service of the NGINX
Ingress controller using the following command format:
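For example, assuming a placeholder namespace; the EXTERNAL-IP column shows the load balancer address:
kubectl get services --namespace <namespace name>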
Depending on the Ingress resource rule you have set in Step 26 for accessing
Documentum Webtop, the URL redirects you to the Documentum Webtop login
page.
4.2.3 Limitations
• External storage provisioner limitation: To deploy Documentum products, you need the capability to dynamically provision both ReadWriteOnce (RWO) and ReadWriteMany (RWX) Persistent Volumes (PVs). Cloud providers supply a built-in provisioner to dynamically provision ReadWriteOnce Persistent Volumes. For example, in Google Cloud Platform, if you specify the storage class as standard in the Persistent Volume Claim (PVC), Google Cloud Platform automatically creates a Persistent Volume of the requested size using a Google Compute Engine Persistent Disk. However, the Google Compute Engine Persistent Disk does not support the ReadWriteMany access mode, so you must provision a Google Cloud Filestore instance and an external provisioner to dynamically manage the Persistent Volumes for ReadWriteMany Persistent Volume Claims.
• To achieve ingress in a Google Cloud Platform or GKE Kubernetes cluster, the Google Cloud Load Balancer (GCLB) L7 load balancer is not used, because it does not communicate with backend services of the ClusterIP type. Use the NGINX Ingress controller to achieve ingress in a GKE cluster. Google Documentation contains more information about Ingress with the NGINX controller on GKE.
4.2.4 Troubleshooting
There is no troubleshooting information for this release.
3. Enable the ingress resource rule for Documentum Administrator using the
following command format:
4. Obtain the external IP address of the load balancer service of the NGINX
Ingress controller using the following command format:
4.3.3 Limitations
There are no limitations for this release.
4.3.4 Troubleshooting
There is no troubleshooting information for this release.
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: <Documentum REST Services service name>
          servicePort: 8080
        path: /dctm-rest
3. Enable the ingress resource rule for Documentum REST Services using the
following command format:
4. Obtain the external IP address of the load balancer service of the NGINX
Ingress controller using the following command format:
4.4.3 Limitations
There are no limitations for this release.
4.4.4 Troubleshooting
There is no troubleshooting information for this release.