DDM 991 AdministratorGuide en
9.9.1
Administrator Guide
Informatica Dynamic Data Masking Administrator Guide
9.9.1
June 2019
© Copyright Informatica LLC 1993, 2019
This software and documentation contain proprietary information of Informatica LLC and are provided under a license agreement containing restrictions on use and
disclosure and are also protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any
form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica LLC. This Software may be protected by U.S. and/or
international Patents and other Patents Pending.
Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as
provided in DFARS 227.7202-1(a) and 227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (OCT 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14 (ALT III),
as applicable.
The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to
us in writing.
Informatica, Informatica Platform, Informatica Data Services, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange,
PowerMart, Metadata Manager, Informatica Data Quality, Informatica Data Explorer, Informatica B2B Data Transformation, Informatica B2B Data Exchange, Informatica
On Demand, Informatica Identity Resolution, Informatica Application Information Lifecycle Management, Informatica Complex Event Processing, Ultra Messaging,
Informatica Master Data Management, and Live Data Map are trademarks or registered trademarks of Informatica LLC in the United States and in jurisdictions
throughout the world. All other company and product names may be trade names or trademarks of their respective owners.
Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights
reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © RSA Security Inc. All Rights Reserved. Copyright © Ordinal Technology Corp. All rights
reserved. Copyright © Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright © Meta
Integration Technology, Inc. All rights reserved. Copyright © Intalio. All rights reserved. Copyright © Oracle. All rights reserved. Copyright © Adobe Systems Incorporated.
All rights reserved. Copyright © DataArt, Inc. All rights reserved. Copyright © ComponentSource. All rights reserved. Copyright © Microsoft Corporation. All rights
reserved. Copyright © Rogue Wave Software, Inc. All rights reserved. Copyright © Teradata Corporation. All rights reserved. Copyright © Yahoo! Inc. All rights reserved.
Copyright © Glyph & Cog, LLC. All rights reserved. Copyright © Thinkmap, Inc. All rights reserved. Copyright © Clearpace Software Limited. All rights reserved. Copyright
© Information Builders, Inc. All rights reserved. Copyright © OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved. Copyright Cleo
Communications, Inc. All rights reserved. Copyright © International Organization for Standardization 1986. All rights reserved. Copyright © ej-technologies GmbH. All
rights reserved. Copyright © Jaspersoft Corporation. All rights reserved. Copyright © International Business Machines Corporation. All rights reserved. Copyright ©
yWorks GmbH. All rights reserved. Copyright © Lucent Technologies. All rights reserved. Copyright © University of Toronto. All rights reserved. Copyright © Daniel
Veillard. All rights reserved. Copyright © Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright © MicroQuill Software Publishing, Inc. All rights reserved.
Copyright © PassMark Software Pty Ltd. All rights reserved. Copyright © LogiXML, Inc. All rights reserved. Copyright © 2003-2010 Lorenzi Davide, All rights reserved.
Copyright © Red Hat, Inc. All rights reserved. Copyright © The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright © EMC
Corporation. All rights reserved. Copyright © Flexera Software. All rights reserved. Copyright © Jinfonet Software. All rights reserved. Copyright © Apple Inc. All rights
reserved. Copyright © Telerik Inc. All rights reserved. Copyright © BEA Systems. All rights reserved. Copyright © PDFlib GmbH. All rights reserved. Copyright ©
Orientation in Objects GmbH. All rights reserved. Copyright © Tanuki Software, Ltd. All rights reserved. Copyright © Ricebridge. All rights reserved. Copyright © Sencha,
Inc. All rights reserved. Copyright © Scalable Systems, Inc. All rights reserved. Copyright © jQWidgets. All rights reserved. Copyright © Tableau Software, Inc. All rights
reserved. Copyright© MaxMind, Inc. All Rights Reserved. Copyright © TMate Software s.r.o. All rights reserved. Copyright © MapR Technologies Inc. All rights reserved.
Copyright © Amazon Corporate LLC. All rights reserved. Copyright © Highsoft. All rights reserved. Copyright © Python Software Foundation. All rights reserved.
Copyright © BeOpen.com. All rights reserved. Copyright © CNRI. All rights reserved.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/), and/or other software which is licensed under various
versions of the Apache License (the "License"). You may obtain a copy of these Licenses at http://www.apache.org/licenses/. Unless required by applicable law or
agreed to in writing, software distributed under these Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
This product includes software which was developed by Mozilla (http://www.mozilla.org/), software copyright The JBoss Group, LLC, all rights reserved; software
copyright © 1999-2006 by Bruno Lowagie and Paulo Soares and other software which is licensed under various versions of the GNU Lesser General Public License
Agreement, which may be found at http://www.gnu.org/licenses/lgpl.html. The materials are provided free of charge by Informatica, "as-is", without warranty of any
kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.
The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California,
Irvine, and Vanderbilt University, Copyright (©) 1993-2006, all rights reserved.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved) and
redistribution of this software is subject to terms available at http://www.openssl.org and http://www.openssl.org/source/license.html.
This product includes Curl software which is Copyright 1996-2013, Daniel Stenberg, <daniel@haxx.se>. All Rights Reserved. Permissions and limitations regarding this
software are subject to terms available at http://curl.haxx.se/docs/copyright.html. Permission to use, copy, modify, and distribute this software for any purpose with or
without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
The product includes software copyright 2001-2005 (©) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://www.dom4j.org/license.html.
The product includes software copyright © 2004-2007, The Dojo Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to
terms available at http://dojotoolkit.org/license.
This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations
regarding this software are subject to terms available at http://source.icu-project.org/repos/icu/icu/trunk/license.html.
This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at
http://www.gnu.org/software/kawa/Software-License.html.
This product includes OSSP UUID software which is Copyright © 2002 Ralf S. Engelschall, Copyright © 2002 The OSSP Project, Copyright © 2002 Cable & Wireless
Deutschland. Permissions and limitations regarding this software are subject to terms available at http://www.opensource.org/licenses/mit-license.php.
This product includes software developed by Boost (http://www.boost.org/) or under the Boost software license. Permissions and limitations regarding this software
are subject to terms available at http://www.boost.org/LICENSE_1_0.txt.
This product includes software copyright © 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at
http://www.pcre.org/license.txt.
This product includes software copyright © 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://www.eclipse.org/org/documents/epl-v10.php and at http://www.eclipse.org/org/documents/edl-v10.php.
This product includes software licensed under the terms at http://www.tcl.tk/software/tcltk/license.html, http://www.bosrup.com/web/overlib/?License,
http://www.stlport.org/doc/license.html, http://asm.ow2.org/license.html, http://www.cryptix.org/LICENSE.TXT, http://hsqldb.org/web/hsqlLicense.html,
http://httpunit.sourceforge.net/doc/license.html, http://jung.sourceforge.net/license.txt, http://www.gzip.org/zlib/zlib_license.html, http://www.openldap.org/software/
release/license.html, http://www.libssh2.org, http://slf4j.org/license.html, http://www.sente.ch/software/OpenSourceLicense.html, http://fusesource.com/downloads/
license-agreements/fuse-message-broker-v-5-3-license-agreement; http://antlr.org/license.html; http://aopalliance.sourceforge.net/; http://www.bouncycastle.org/
licence.html; http://www.jgraph.com/jgraphdownload.html; http://www.jcraft.com/jsch/LICENSE.txt; http://jotm.objectweb.org/bsd_license.html; http://www.w3.org/
Consortium/Legal/2002/copyright-software-20021231; http://www.slf4j.org/license.html; http://nanoxml.sourceforge.net/orig/copyright.html; http://www.json.org/
license.html; http://forge.ow2.org/projects/javaservice/, http://www.postgresql.org/about/licence.html, http://www.sqlite.org/copyright.html, http://www.tcl.tk/
software/tcltk/license.html, http://www.jaxen.org/faq.html, http://www.jdom.org/docs/faq.html, http://www.slf4j.org/license.html; http://www.iodbc.org/dataspace/
iodbc/wiki/iODBC/License; http://www.keplerproject.org/md5/license.html; http://www.toedter.com/en/jcalendar/license.html; http://www.edankert.com/bounce/
index.html; http://www.net-snmp.org/about/license.html; http://www.openmdx.org/#FAQ; http://www.php.net/license/3_01.txt; http://srp.stanford.edu/license.txt;
http://www.schneier.com/blowfish.html; http://www.jmock.org/license.html; http://xsom.java.net; http://benalman.com/about/license/; https://github.com/CreateJS/
EaselJS/blob/master/src/easeljs/display/Bitmap.js; http://www.h2database.com/html/license.html#summary; http://jsoncpp.sourceforge.net/LICENSE; http://
jdbc.postgresql.org/license.html; http://protobuf.googlecode.com/svn/trunk/src/google/protobuf/descriptor.proto; https://github.com/rantav/hector/blob/master/
LICENSE; http://web.mit.edu/Kerberos/krb5-current/doc/mitK5license.html; http://jibx.sourceforge.net/jibx-license.html; https://github.com/lyokato/libgeohash/blob/
master/LICENSE; https://github.com/hjiang/jsonxx/blob/master/LICENSE; https://code.google.com/p/lz4/; https://github.com/jedisct1/libsodium/blob/master/
LICENSE; http://one-jar.sourceforge.net/index.php?page=documents&file=license; https://github.com/EsotericSoftware/kryo/blob/master/license.txt; http://www.scala-
lang.org/license.html; https://github.com/tinkerpop/blueprints/blob/master/LICENSE.txt; http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/
intro.html; https://aws.amazon.com/asl/; https://github.com/twbs/bootstrap/blob/master/LICENSE; https://sourceforge.net/p/xmlunit/code/HEAD/tree/trunk/
LICENSE.txt; https://github.com/documentcloud/underscore-contrib/blob/master/LICENSE, and https://github.com/apache/hbase/blob/master/LICENSE.txt.
This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php), the Common Development and
Distribution License (http://www.opensource.org/licenses/cddl1.php), the Common Public License (http://www.opensource.org/licenses/cpl1.0.php), the Sun Binary
Code License Agreement Supplemental License Terms, the BSD License (http://www.opensource.org/licenses/bsd-license.php), the new BSD License (http://
opensource.org/licenses/BSD-3-Clause), the MIT License (http://www.opensource.org/licenses/mit-license.php), the Artistic License (http://www.opensource.org/
licenses/artistic-license-1.0) and the Initial Developer’s Public License Version 1.0 (http://www.firebirdsql.org/en/initial-developer-s-public-license-version-1-0/).
This product includes software copyright © 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this
software are subject to terms available at http://xstream.codehaus.org/license.html. This product includes software developed by the Indiana University Extreme! Lab.
For further information please visit http://www.extreme.indiana.edu/.
This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject
to terms of the MIT license.
DISCLAIMER: Informatica LLC provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied
warranties of noninfringement, merchantability, or use for a particular purpose. Informatica LLC does not warrant that this software or documentation is error free. The
information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation
is subject to change at any time without notice.
NOTICES
This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software
Corporation ("DataDirect") which are subject to the following terms and conditions:
1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES
OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH
OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.
Revision: 1
Publication Date: 2019-06-27
Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Informatica Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Informatica Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Informatica Knowledge Base. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Informatica Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Informatica Product Availability Matrices. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Informatica Velocity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Informatica Marketplace. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Informatica Global Customer Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter 2: Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Authentication Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
LDAP Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Active Directory Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Internal Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Admin User Name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Setting Up Authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Chapter 3: Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Target Database Credentials Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Default Keystore. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Custom Keystore. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Managing Keystores and Aliases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
SSL Communication in Dynamic Data Masking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Enable SSL Communication in Dynamic Data Masking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Enabling SSL Communication in Dynamic Data Masking. . . . . . . . . . . . . . . . . . . . . . . . . . 30
Keystore Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Truststore Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Key Strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Trust Strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Protocols and Ciphers Suites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Reloading the Security Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Example SSL Configuration Process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Kerberos Authentication for Hive or Impala Databases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Configuring Kerberos Encryption for Hive Databases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Configuring the ddm.security file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Client Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
JAAS Configuration Options in the ddm.security File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Microsoft SQL Server Dynamic Data Masking Administrator Required Privileges. . . . . . . . . . 65
Netezza Connection Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Oracle Connection Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Oracle Connection Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Oracle Dynamic Data Masking Administrator Required Privileges. . . . . . . . . . . . . . . . . . . . 68
Using DBLink. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Changing the Listener Port. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Configuring the Oracle Target Database Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Define the CURRENT_SCHEMA Symbol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
PostgreSQL Connection Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Configuring the PostgreSQL Driver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
PostgreSQL Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
PostgreSQL Dynamic Data Masking Administrator Required Privileges. . . . . . . . . . . . . . . . 72
Defining the CURRENT_PATH Symbol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Sybase Connection Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Sybase Connection Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Sybase Dynamic Data Masking Administrator Required Privileges. . . . . . . . . . . . . . . . . . . . 74
Search and Replace Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Teradata Connection Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Teradata Connection Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Teradata Dynamic Data Masking Administrator Required Privileges. . . . . . . . . . . . . . . . . . 77
Configuring the Teradata Drivers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Chapter 7: Configuration for MicroStrategy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Configuration for MicroStrategy Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Configure Dynamic Data Masking to Capture MicroStrategy User Context. . . . . . . . . . . . . . . 93
Chapter 9: Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Logs Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Audit Trail and Reporting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Audit Trail Reports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Loggers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
System Loggers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Custom Loggers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Loggers Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Appenders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Rolling File Appender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Syslog Appender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
SMTP Appender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
SNMP Appender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Creating an Appender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Custom Appender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Log Levels. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Setting the Log Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
JDBC Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Configure JDBC Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
ODBC Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Set the odbcLogConfig.properties Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Configure ODBC Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Teradata COP Discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
COP Discovery through the Teradata Driver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
COP Discovery in Dynamic Data Masking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Implementation Scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Hive Connectivity with Apache ZooKeeper and Dynamic Data Masking . . . . . . . . . . . . . . . . . . 131
Configuring Dynamic Data Masking for Apache ZooKeeper. . . . . . . . . . . . . . . . . . . . . . . 131
Configuring the Hive Database for ZooKeeper on the Dynamic Data Masking Server. . . . . . . 133
Configuring the JDBC Client. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Connecting the Dynamic Data Masking Server to ZooKeeper on Server Startup. . . . . . . . . . 134
Dynamic Data Masking Server Commands for ZooKeeper. . . . . . . . . . . . . . . . . . . . . . . . 135
Restoration of Broken Connections to ZooKeeper. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Import. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
SetDBPassword. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Sync. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
SetKeyStore. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Server Service Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Import. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Preface
The Informatica Dynamic Data Masking Administrator Guide contains information to help administrators
manage and configure Dynamic Data Masking. This guide assumes that you have knowledge of your
operating systems and relational database systems, which includes the database engines, flat files, and
mainframe systems in your environment.
Informatica Resources
Informatica provides you with a range of product resources through the Informatica Network and other online
portals. Use the resources to get the most from your Informatica products and solutions and to learn from
other Informatica users and subject matter experts.
Informatica Network
The Informatica Network is the gateway to many resources, including the Informatica Knowledge Base and
Informatica Global Customer Support. To enter the Informatica Network, visit
https://network.informatica.com.
To search the Knowledge Base, visit https://search.informatica.com. If you have questions, comments, or
ideas about the Knowledge Base, contact the Informatica Knowledge Base team at
KB_Feedback@informatica.com.
Informatica Documentation
Use the Informatica Documentation Portal to explore an extensive library of documentation for current and
recent product releases. To explore the Documentation Portal, visit https://docs.informatica.com.
Informatica maintains documentation for many products on the Informatica Knowledge Base in addition to
the Documentation Portal. If you cannot find documentation for your product or product version on the
Documentation Portal, search the Knowledge Base at https://search.informatica.com.
If you have questions, comments, or ideas about the product documentation, contact the Informatica
Documentation team at infa_documentation@informatica.com.
Informatica Velocity
Informatica Velocity is a collection of tips and best practices developed by Informatica Professional Services
and based on real-world experiences from hundreds of data management projects. Informatica Velocity
represents the collective knowledge of Informatica consultants who work with organizations around the
world to plan, develop, deploy, and maintain successful data management solutions.
You can find Informatica Velocity resources at http://velocity.informatica.com. If you have questions,
comments, or ideas about Informatica Velocity, contact Informatica Professional Services at
ips@informatica.com.
Informatica Marketplace
The Informatica Marketplace is a forum where you can find solutions that extend and enhance your
Informatica implementations. Leverage any of the hundreds of solutions from Informatica developers and
partners on the Marketplace to improve your productivity and speed up time to implementation on your
projects. You can find the Informatica Marketplace at https://marketplace.informatica.com.
Informatica Global Customer Support
To find your local Informatica Global Customer Support telephone number, visit the Informatica website at
the following link:
https://www.informatica.com/services-and-training/customer-success-services/contact-us.html.
To find online support resources on the Informatica Network, visit https://network.informatica.com and
select the eSupport option.
Chapter 1
As an administrator, you use the Server Control program to manage the Dynamic Data Masking Server and
start and stop the Dynamic Data Masking services. You use the Management Console to configure target
databases, define listener ports, manage target databases, maintain system logs, and define rules. The
administrative tasks that you perform help to ensure that Dynamic Data Masking operates effectively and
efficiently.
Note: To prevent loss of data, such as connection rules, security rule sets, and database configurations,
perform regular backups of the entire Dynamic Data Masking directory to a location on your network or an
external storage location.
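As an illustration only, a scheduled backup of the installation directory might look like the following sketch. The directory paths are placeholders, not actual Dynamic Data Masking installation locations, and the scaffolding line that creates the source directory exists only so the sketch runs standalone.

```shell
#!/bin/sh
# Placeholder paths: point DDM_HOME at your Dynamic Data Masking
# installation and BACKUP_DIR at a network or external backup location.
DDM_HOME="${DDM_HOME:-./DynamicDataMasking}"
BACKUP_DIR="${BACKUP_DIR:-./ddm-backups}"

# Scaffolding so the sketch runs standalone; remove for real use.
mkdir -p "$DDM_HOME/cfg" "$BACKUP_DIR"

# Archive the entire installation directory, stamped with the date, so
# connection rules, security rule sets, and database configurations are
# all captured together.
STAMP=$(date +%Y%m%d)
tar -czf "$BACKUP_DIR/ddm-backup-$STAMP.tar.gz" \
    -C "$(dirname "$DDM_HOME")" "$(basename "$DDM_HOME")"
```

Run the script from a scheduler such as cron so backups happen regularly rather than on demand.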
Dynamic Data Masking Architecture
Dynamic Data Masking acts as a security layer between the application and the database to protect sensitive
data stored in the database. The Dynamic Data Masking Server intercepts SQL requests sent to the database
and uses a set of connection and security rules to determine how to process the request.
The following figure shows the architecture of Dynamic Data Masking and how the Dynamic Data Masking
Server relates to the application and the database:
The Dynamic Data Masking Server listens on the port where the application sends database requests. When
the application sends a request to the database, the Dynamic Data Masking Server receives the request
before it goes to the database. The Rule Engine uses the connection rules and security rules to determine the
action to perform on the incoming request. The Dynamic Data Masking service sends the modified request to
the database. The database processes the request and sends the results back to the application.
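The interception flow described above can be sketched as a minimal TCP proxy. This is a conceptual illustration only, not the product's implementation: the host names and ports are hypothetical, and the real Dynamic Data Masking Server applies connection and security rules at the point marked in the comment.

```python
import socket

def serve_one_request(listener_port, db_host, db_port):
    """Accept one client connection, forward the request to the database,
    and relay the response back to the client (conceptual sketch only)."""
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listener_port))
    server.listen(1)
    client, _ = server.accept()
    request = client.recv(4096)            # intercept the client request
    # ...the Rule Engine would evaluate connection and security rules
    # here and possibly rewrite, block, or redirect the request...
    upstream = socket.create_connection((db_host, db_port))
    upstream.sendall(request)              # forward the (possibly rewritten) request
    client.sendall(upstream.recv(4096))    # relay the database response
    for s in (upstream, client, server):
        s.close()
```

Because the client connects to the listener port rather than to the database port, the application needs no code changes; only its connection target changes.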
Dynamic Data Masking provides management tools that you can use to manage the Dynamic Data Masking
Server and set up connection and security rules. With the Server Control tool, you can start, stop, and manage
the Dynamic Data Masking Server. On the Management Console, you can configure and manage the Dynamic
Data Masking services and create and manage connection and security rules.
The Dynamic Data Masking Server provides services and resources to intercept database requests and
perform data masking tasks.
The Dynamic Data Masking service listens on the listener port to monitor and route incoming database
requests.
DDM for Azure: Listens for and routes database requests for a Microsoft Azure SQL database. The service supports the SSL mode of communication.
DDM for DB2: Listens for and routes database requests for an IBM Db2 database. The service supports SSL and non-SSL modes of communication.
DDM for FAS: Listens for and routes database requests for Data Vault. The service supports SSL and non-SSL modes of communication.
DDM for Hive: Listens for and routes database requests for a Hive database. The service supports SSL and non-SSL modes of communication, as well as Kerberos authentication and Kerberos-encrypted data.
DDM for Hive HTTP: Listens for and routes database requests for Hive databases that use HTTP transport. The service supports SSL and non-SSL modes of communication, and Kerberos authentication.
DDM for Impala: Listens for and routes database requests for an Impala database. The service supports SSL and non-SSL modes of communication, as well as Kerberos authentication.
DDM for Informix: Listens for and routes database requests in the Informix native protocol to Informix databases.
DDM for Informix DRDA: Listens for and routes database requests in the Distributed Relational Database Architecture (DRDA) protocol to Informix databases.
DDM for JDBC: Listens for database requests for a database that uses JDBC connectivity.
DDM for ODBC: Listens for database requests for a database that uses ODBC connectivity.
DDM for Oracle: Listens for and routes database requests for an Oracle database. The service supports SSL and non-SSL modes of communication.
DDM for PostgreSQL: Listens for and routes database requests for a PostgreSQL database.
DDM for SQL Server: Listens for and routes database requests for a Microsoft SQL Server database. The service supports SSL and non-SSL modes of communication.
DDM for Sybase: Listens for and routes database requests for a Sybase database.
DDM for Teradata: Listens for and routes database requests for a Teradata database.
Rule Engine
The Rule Engine evaluates incoming database requests and applies connection and security rules to
determine how to route requests and mask data. The Rule Engine can modify the database request
based on the rules defined in the Dynamic Data Masking Server.
• Connection rule. Defines the conditions and actions that the Rule Engine applies to determine how to
route a database connection request received from an application.
• Security rule. Contains the conditions and actions that define what to do with the database SQL
request and how to apply SQL rewrites that manipulate the returned SQL result set.
Server Control
Server Control is a command line program that you use to configure and manage the Dynamic Data
Masking Server. Use Server Control to start or stop the Dynamic Data Masking Server and services or to
change the port number or password for the Dynamic Data Masking Server.
Management Console
The Management Console is a client application that you use to manage the Dynamic Data Masking
Server. You can use the Management Console to create and manage rules and to configure and manage
connections to databases.
The application sends a request to the database. As a proxy server, the Dynamic Data Masking Server
intercepts the request and the Rule Engine evaluates the request before it sends the request to the database.
Dynamic Data Masking uses the following process to apply data masking to a database request:
1. The Dynamic Data Masking service listens on the listener port for requests sent to the database.
When the application sends a database connection request, the Dynamic Data Masking service receives
the request instead of the database.
2. The Rule Engine uses a connection rule to determine how to process the incoming connection request.
The connection rule defines the criteria to identify and route the database request. If the database
request matches the criteria, the Rule Engine determines the action to perform on the request. The
connection rule action can include routing the connection to a specified database, host, and port, and
applying a security rule set. The connection rule can also block the connection request. If the connection rule specifies a direct action, it can return a redirect request to the application with the database host and port, and the application can then connect directly to the database.
For example, you can define a connection rule that allows an Informatica ETL process or batch job to bypass Dynamic Data Masking, which reduces overhead on internal processes.
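Connection-rule evaluation can be sketched as a matcher plus an action. The field names and action names below are illustrative only, not the product's actual rule schema:

```python
def apply_connection_rule(rule, request):
    """Evaluate one connection rule against an incoming connection request.
    Returns the routing decision and an optional target (sketch only)."""
    if not rule["matcher"](request):
        return ("continue", None)            # criteria not met: try the next rule
    if rule["action"] == "block":
        return ("block", None)               # refuse the connection
    if rule["action"] == "direct":
        return ("redirect", rule["target"])  # client connects to the database directly
    return ("route", rule["target"])         # route through Dynamic Data Masking

# A hypothetical rule that lets an ETL user bypass masking entirely:
etl_bypass = {
    "matcher": lambda req: req["user"] == "etl_batch",
    "action": "direct",
    "target": ("db-host.example.com", 1521),
}
```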
The salary_rules rule set applies a masking function to all requests that reference employee salaries. The rule
set restricts access to management salaries and masks the first three digits of the employee salary column.
The Dynamic Data Masking service intercepts the incoming SQL request, identifies that the request
references the salary tables, rewrites it with the masking function, and sends the rewritten request to the
database. The database receives the request and sends masked data back to the application through the
Dynamic Data Masking service.
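The rewrite step can be sketched as a simple substitution. This is illustrative only: the product's Rule Engine is rule-driven rather than a regular expression, and the SUBSTR-based masking expression is an assumption.

```python
import re

def mask_salary(sql):
    """Replace the first reference to the salary column with an expression
    that masks its first three digits (conceptual sketch only)."""
    masked = "CONCAT('XXX', SUBSTR(salary, 4)) AS salary"
    return re.sub(r"\bsalary\b", masked, sql, count=1)
```

The client still issues `SELECT salary FROM employees`; only the request that reaches the database is rewritten.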
Use the following general guidelines when you set up Dynamic Data Masking:
Production Environments
Privileged users, such as production database administrators, technical support personnel, and quality
assurance teams, access personally identifiable information as a daily part of their jobs.
Non-Production Environments
Dynamic Data Masking eliminates exposure of sensitive data without affecting the ability of users to perform
their job functions.
After installation, complete the following steps to set up Dynamic Data Masking:
1. Log in to the Management Console. Use the user name admin and the Dynamic Data Masking Server
password to log in to the Management Console for the first time.
2. Optionally, change the default Internal authentication scheme. You can use LDAP or Active Directory
authentication to authorize a list of Dynamic Data Masking administrators.
3. Create a Dynamic Data Masking service. Configure the listener port number to match the port number
where the client sends requests to the database.
4. Define the database connection properties for the database that requires data masking.
5. Create a connection rule. Configure the rule to identify the database requests that must be masked.
Assign a database and a security rule set to the connection rule set.
6. Create a security rule set. Define the rules for masking the data sent back to the application.
Dynamic Data Masking applies the security rule set to SQL requests from a client or application that initiates
a connection that uses the Dynamic Data Masking listener port. When you modify rules within the security
rule set, Dynamic Data Masking immediately applies the modified rules on new SQL requests.
Management Console
The Management Console is the client component of the Dynamic Data Masking Server.
You can install the Management Console on a remote machine or the local system to manage the Dynamic
Data Masking service. Use the Management Console to manage and configure domains and Dynamic Data
Masking services, define connection rules for Dynamic Data Masking services, define security rules, and
configure target databases.
Logging In to the Management Console
You access the Dynamic Data Masking components through the Management Console. Log in to the Management Console to manage target databases, configure listener ports, and define rules.
To log in to the Management Console, you need the server address and port number of the server that
Dynamic Data Masking operates on and the administrator credentials.
1. On Windows 7 and earlier, select Start > All Programs > Informatica > Dynamic Data Masking >
Management Console.
On Windows 10 and later, select Search the Web and Windows > All apps > Informatica Dynamic Data
Masking > Management Console.
The Login window appears.
2. Verify that the Server Host and Port display the correct information for the Dynamic Data Masking
Server.
3. Enter the Dynamic Data Masking administrator user name and password. If you use LDAP
authentication, the user name must be in LDAP format. Click Connect.
A tree appears in the Management Console after you log in successfully.
You must have the X Window server installed on the machine that you use to log in to the Management
Console.
1. Open a terminal and navigate to the Dynamic Data Masking installation directory.
For example, you might enter the following command:
cd /home/Informatica/DDM
2. Run the following command to start the Management Console:
./mng
The Login window appears.
3. Verify that the Server Host and Port display the correct information for the Dynamic Data Masking
Server.
4. Enter the Dynamic Data Masking administrator user name and password. If you use LDAP
authentication, the user name must be in LDAP format. Click Connect.
A tree is visible in the Management Console after you log in.
Database Management
A database node contains references to databases. The Dynamic Data Masking Server controls access to the
databases that the database nodes reference.
A database node can reference an Oracle, Microsoft SQL Server, Db2, Informix, Sybase, Data Vault, Hive, or
Teradata database. The Management Console tree can contain an unlimited number of database nodes. You can add, edit, and remove database nodes.
The Management Console contains one server node. Each Dynamic Data Masking instance is associated with one Dynamic Data Masking Server. You connect to a server when you log in to the Management Console. The Dynamic Data Masking Server manages databases located under the parent domain or any subdomains of the server node in the tree.
The server node has a domain node parent. The server node can have Dynamic Data Masking service child
nodes. You can edit and move the server node.
Note: You cannot add or remove the Dynamic Data Masking Server node with the Add or Remove options in
the Management Console menu.
The Dynamic Data Masking Server can contain a single service node for each type of database. Create service nodes under the server node. Service nodes cannot have child nodes. You can add, edit, and remove service nodes.
Each Dynamic Data Masking service routes requests to a specific type of database. For example, the
Dynamic Data Masking for Oracle service routes requests to Oracle databases and the Dynamic Data
Masking for DB2 service routes requests to Db2 databases.
You must configure the database listener port to forward connections to the Dynamic Data Masking listener
port. How you configure the listener ports depends on whether the Dynamic Data Masking service runs on the
database server or on a standalone server. You can define the listener port that the Dynamic Data Masking
service uses through the Services Editor in the Management Console.
If the Dynamic Data Masking service runs on a standalone server, you must route application connection
requests to the Dynamic Data Masking listener port.
1. In the Management Console, right-click on a Dynamic Data Masking service and select Edit.
The Service Editor appears.
1. In the Management Console, right-click the Dynamic Data Masking service and select Edit.
The Service Editor appears.
2. Select the port to delete.
3. Click Remove port.
4. Click OK.
Configuration Management
The Dynamic Data Masking configuration files store information about the Management Console tree nodes
and user access.
You can find the Dynamic Data Masking configuration files in the following location:
<Dynamic Data Masking installation>/cfg
The config.properties file contains the configuration parameters of the Dynamic Data Masking Server and
service nodes in the Management Console.
The config.cfg file contains information on domain, database, and security rule set nodes in the
Management Console. The config.cfg file is binary and you must not edit it. The config.cfg file and the
public key file, config.pbk, have a digital signature. Dynamic Data Masking updates the digital signature
when a user performs an operation on the Dynamic Data Masking Server through the Management Console.
If Dynamic Data Masking cannot verify the public key against the configuration file, the Dynamic Data
Masking Server writes an error message to the server.log file and does not start.
When the Dynamic Data Masking Server intercepts a client command, it might access client objects, such as
tables, views, and stored procedures. If the administrator that created the database connection in Dynamic
Data Masking cannot access the client objects, the query can return incorrect or unmasked data.
Authentication
This chapter includes the following topics:
• Authentication Overview, 21
• LDAP Authentication, 21
• Active Directory Authentication, 22
• Internal Authentication, 23
• Setting Up Authentication, 23
Authentication Overview
When you use the Management Console to connect to the Dynamic Data Masking Server, you must log in with
a user name and password. Dynamic Data Masking user authentication depends on the authentication
scheme that you configure.
You can configure the following authentication schemes for Dynamic Data Masking:
• LDAP authentication. Authentication scheme that uses the LDAP software protocol to locate
organizations, groups, individuals, and other resources in a network.
• Active Directory authentication. Authentication scheme that uses the Active Directory service to enforce
security for users and resources in a Windows network.
• Internal authentication. Authentication scheme that uses the Dynamic Data Masking Server password to
authenticate users who log in to the Management Console.
Use the Management Console to set the authentication scheme for Dynamic Data Masking. By default, the
Management Console uses internal authentication. The first time you log in to the Management Console, you
enter the user name and the server password that you created when you installed the Dynamic Data Masking
Server.
LDAP Authentication
You can use LDAP authentication to authenticate users who log in to the Management Console.
If you use LDAP authentication, you must provide the user name in LDAP format when you log in to the
Management Console.
You can configure the following LDAP properties if you use the LDAP authentication scheme:

Administration Group: Distinguished name (DN) of the LDAP group to use for authentication.
Root DN: Base distinguished name (DN) of the LDAP domain in which to begin user lookups.
User Object Class: Object class type used for the LDAP user object.
User MemberOf Attribute: Locate users by attribute. Name of the user attribute to use to identify the groups associated with a user.
Group Object Class: Object class type used for the LDAP group object.
Group MemberOf Attribute: Locate users by attribute. Name of the group attribute to use to identify the groups associated with a user.
Group Members Attribute: Locate users by group. Name of the attribute to use to identify the members of a group.
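For illustration, the User Object Class and MemberOf properties combine into a standard LDAP search filter such as the one below. This is a hypothetical helper, not part of the product; Dynamic Data Masking performs the lookup internally.

```python
def user_lookup_filter(user_object_class, member_of_attr, group_dn):
    """Build an LDAP search filter that finds users of the given object
    class who belong to the administration group (sketch only)."""
    return "(&(objectClass={})({}={}))".format(
        user_object_class, member_of_attr, group_dn)
```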
You can configure the following properties if you use the Active Directory authentication scheme:

Domain Server Port: Port number for the Active Directory server.
Domain: Name of the domain that contains the Active Directory server.
Administration Group: Name of the Active Directory group to use for authentication.
Internal Authentication
Dynamic Data Masking provides a simple authentication scheme that authenticates a user based on the
Dynamic Data Masking Server password.
When you install the Dynamic Data Masking Server, you must create a password for the server. Dynamic Data
Masking uses the server password for authentication when you initially log in to the Management Console.
The Management Console login requires a user name and password. The Internal authentication scheme
uses only the password for authentication. It uses the user name to identify the user and to track user activity
in the Management Console.
You can use the Server Control command SetInternalPassword to change the Dynamic Data Masking Server
password at any time.
The admin user name supersedes the authentication scheme configured for Dynamic Data Masking. You can log in to the Management Console with the admin user name at any time. If you log in to the Management Console with the admin user name, Dynamic Data Masking authenticates against the server password, even if Dynamic Data Masking is configured to use LDAP or Active Directory authentication.
When you log in as admin, Dynamic Data Masking uses internal authentication only for the session. It does
not permanently change the authentication scheme configured for Dynamic Data Masking. When you log out,
you can log back in with an LDAP or Active Directory user account.
If Dynamic Data Masking uses LDAP authentication and the configuration of the LDAP server or network
changes, you might not be able to log in to the Management Console. You must update the LDAP server
attributes in the Management Console for LDAP authentication to work properly. Log in to the Management
Console with the admin user name and server password and update the attributes of the LDAP server. The
next time you log in, you can use LDAP authentication.
Setting Up Authentication
Use the Management Console to set up the authentication scheme for Dynamic Data Masking.
To set the authentication for the first time, you must log in with a user name and the server password. You
set the initial server password when you install the Dynamic Data Masking Server.
• For the LDAP authentication scheme, enter the LDAP domain, group, and user properties. Consult the
LDAP administrator to get the correct information.
• For the Active Directory authentication scheme, enter the Active Directory domain properties and the
administration group. Consult the Active Directory administrator to get the correct information.
• For the Internal authentication scheme, you do not need to enter any information. Use the Server
Control command SetInternalPassword to manage the Dynamic Data Masking Server password.
6. Click OK.
Log out and log in again to the Management Console to verify that the selected authentication scheme is in
effect.
Chapter 3
Security
This chapter includes the following topics:
The keystore and security provider protect database credentials and prevent unauthorized access to the
target database. When you configure a database, you can choose to use either the default keystore or a
custom keystore and security provider. If you choose the default keystore, Dynamic Data Masking also uses
the default security provider. Both the default keystore and default security provider are preconfigured for
use in Dynamic Data Masking. If you choose a custom keystore and security provider, you must configure
Dynamic Data Masking for use with the custom keystore and security provider.
Default Keystore
The default keystore and security provider are preconfigured for use with any database supported by
Dynamic Data Masking.
The default keystore is a JCEKS-type keystore that permits both read and write operations. If the keystore
does not already exist, it is created in the following location when the Dynamic Data Masking Server starts:
<DDM>/cfg/ddm.jceks
When you configure the target database, you can select the default keystore option and then enter the
database user name and password. When you save the database object, an alias is automatically generated
and saved in the keystore along with the database credentials. The Dynamic Data Masking Server reads the
database credentials from the keystore to create an internal connection to the database. The alias is not
visible in the database form, and the Dynamic Data Masking Server never sends the credentials to the client
or outside of the Dynamic Data Masking Server.
Dynamic Data Masking upgrades each database object in the following process:
Custom Keystore
You can use a custom keystore and security provider to store and access the target database credentials. To
use a custom keystore and security provider, you must create an XML configuration file called ddm.security.
If you want to use CyberArk as a security provider, you must also create a CyberArk properties file. Then you
can create the target database connection.
ddm.security File
The ddm.security file contains the information used to define the custom keystore and security provider. To configure custom keystores and security providers, create the file in the following location: <DDM>/cfg/ddm.security
Use the following parameters to configure the ddm.security file for the custom security provider:
<fqcn>: Mandatory. Fully qualified class name of the security provider. For example: com.security.provider.MyProvider
<file>: Optional. Provider-specific initialization parameter. For example, the path to a configuration file.
Use the following parameters to configure the ddm.security file for the custom keystore:
storeName: Mandatory. Unique name of the keystore. After you define the keystore name, do not modify it.
storeType: Mandatory. Type of keystore. For CyberArk, enter the storeType as CyberArk.
provider: Optional. Name of the custom security provider that Provider.getName() returns. Note that this is not the name of the class. If the security provider is CyberArk, this parameter is mandatory and must match the provider.name property in the CyberArk properties file.
encrypted: Optional. Default is false. Specify a clear password for the keystore in the ddm.security file. Dynamic Data Masking encrypts the password at run time and sets encrypted=true in the file.
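Based on the parameters above and on the ProviderDescriptor structure shown in the provider example below, a keystore entry in the ddm.security file might look like the following sketch. The entry type name and all values are assumptions for illustration; consult the exact schema for your product version.

```xml
<keyStores type="ArrayList">
    <entry type="KeyStoreDescriptor">
        <storeName>myKeystore</storeName>
        <storeType>JCEKS</storeType>
        <storePassword>secret</storePassword>
        <encrypted>false</encrypted>
        <provider>MyProvider</provider>
    </entry>
</keyStores>
```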
After you configure the ddm.security file, start the Dynamic Data Masking Server. When you configure the
database object, enter the keystore name defined in the ddm.security file and the alias associated with the
database user name and password in the custom keystore. For CyberArk accounts, the alias name was
defined during creation of the CyberArk account.
Custom security providers can allow read-only or read and write access to the keystore. For a read-only
keystore, enter the existing alias.
The following example shows the security provider section of a ddm.security file that is configured for CyberArk:
<providers type="ArrayList">
<entry type="ProviderDescriptor">
<file>cfg\CyberArk_DDMQASafe.props</file>
<fqcn>com.informatica.security.jce.cyberark.CyberarkProvider</fqcn>
</entry>
</providers>
</XML>
You can configure multiple CyberArk entries in the ddm.security file with the same method.
Create the CyberArk properties file within the <DDM>/cfg/ folder. For example: <DDM>/cfg/CyberArk.props.
Provide the location of the file in the <file> parameter of the security provider section of the ddm.security
file.
provider.name: Name of the security provider. The name must match the <provider> tag in the keystore section of the ddm.security file.
provider.client.appid: Application ID. The application ID was created during the CyberArk installation.
provider.folder.path: Path from the root to the folder that contains the given account. If you leave this parameter blank, Dynamic Data Masking assumes that the account is under the root.
Note: If you plan to use CyberArk as a security provider, you must put the CyberArk JavaPasswordSDK.jar file
into the <Dynamic Data Masking installation>/lib/ext directory to complete the integration. The
JavaPasswordSDK.jar file is located in the ApplicationPasswordSdk directory of the CyberArk AIM
installation (on Microsoft Windows) or the /opt/CARKaim/sdk/ directory (on Linux).
1. Launch Server Control from the Start menu on Microsoft Windows, or use the server shell script on
Linux or UNIX.
2. Enter the following syntax:
server config setKeyStore -path <path> [-storeName <storeName> -alias <alias>] | [-user <user> -password <password>]
Path is a required parameter. If you want to change the keystore from default to custom, both keystore
name and alias are required parameters. If you want to change the keystore from custom to default, both
user name and password are required parameters. Do not use both sets of parameters in the same
command invocation.
You can also set another alias in a database object. The alias must already exist in the designated
keystore. You can set the alias, the keystore name, or both.
When you enable SSL communication, you configure the cfg/ddm.security file for keystores and truststores
used by the Dynamic Data Masking Server. You also configure the cfg/client.security file for truststores
used by clients such as the Management Console and Server Control. The configuration parameters for the cfg/client.security and cfg/ddm.security files are the same.
You also use the cfg/ddm.security file to configure key strategies and trust strategies. Key strategies are
required when Dynamic Data Masking uses multiple signed certificates to perform the handshake with
database clients. Trust strategies tell Dynamic Data Masking how to handle a certificate that does not exist
in the Dynamic Data Masking truststore and is therefore rejected by the trust manager.
Dynamic Data Masking supports various security protocols and cipher suites. You can define global settings
for security protocols and cipher suites, or you can configure protocols and ciphers that map to a specific
Dynamic Data Masking host and port.
You can enable SSL communication for Oracle, IBM Db2, and Microsoft SQL Server target databases.
After upgrade or installation, the Dynamic Data Masking Server generates a self-signed certificate in the file
cfg/ddm.jceks. By default, the Dynamic Data Masking Server is not configured with keystores and key
strategies. The Dynamic Data Masking Server uses the automatically generated self-signed certificate to
perform the SSL handshake.
The Dynamic Data Masking administration tools, for example the Management Console and command line
tools, are not preconfigured with truststores and trust strategies. By default, when SSL is enabled, the
administration tools accept any server certificates without SSL authentication of the certificate.
From the Management Console, verify that the SSL check box is selected on the Add/Edit Database form for the database that you want to enable SSL communication for. In the service configuration form for the corresponding Dynamic Data Masking service, enter "SSL" in the appropriate row of the Security column.
1. To switch the Dynamic Data Masking Server administration port between SSL and clear, run the
following command:
server network edit [<host>:]<port>[;SSL]
• For example, to switch the administration port from clear to SSL:
server network edit 8195;SSL
or
server network edit 127.0.0.1:8195;SSL
• To switch the administration port from SSL to clear:
server network edit 8195
or
server network edit 127.0.0.1:8195
On Linux operating systems, you must enclose the server network edit parameter value in double
quotes. For example: server network edit "8195;SSL"
2. To view the current network settings, run the following server network command:
server network
Keystore Configuration
To configure SSL communication in Dynamic Data Masking, copy keystores from the target database to the
Dynamic Data Masking Server. Usually you can copy the database keystores without modifying them. After
you copy the keystores to the Dynamic Data Masking Server installation, configure the cfg/ddm.security file
to set the keystore parameters.
If a database certificate contains information that limits where you can use the certificate, you might have to adjust the installation. For example, if a certificate is valid only on the database host, you can install Dynamic Data Masking on the same host and then copy the database keystore as-is to the Dynamic Data Masking Server.
Dynamic Data Masking can use one key password for each keystore to read key entries. Dynamic Data
Masking tries to read as many key entries as possible using the given key password, and skips entries with
another key password. If you do not specify the parameter keyPassword in the cfg/ddm.security file,
Dynamic Data Masking tries to read all of the key entries using an empty password and then the keystore
password. If all tries fail, Dynamic Data Masking logs a message that it cannot use that keystore.
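That fallback can be sketched as follows, modeling the keystore as a mapping from alias to key password. This is a stand-in only; real keystores are binary files that Dynamic Data Masking reads through the Java keystore API.

```python
def readable_entries(entries, key_password, store_password):
    """Return the aliases whose key entries can be opened. If no key
    password is configured, try the empty password and then the keystore
    password; skip entries that need any other password (sketch only)."""
    if key_password is not None:
        candidates = [key_password]
    else:
        candidates = ["", store_password]
    return {alias for alias, pw in entries.items() if pw in candidates}
```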
The storePassword and keyPassword parameters can be the same. For example, the following image shows
the keystore configuration section of the cfg/ddm.security file:
You can configure the following keystore properties in the keystore section of the cfg/ddm.security file:

provider: Name of the specific security provider that works with the keystore. Not required.
preferred: If set to true, Dynamic Data Masking loads the preferred keystore first, before any other keystores. Not required. Default is false.
At run time, Dynamic Data Masking loads keystores and searches aliases in the following order:
Note: The key password javax.net.ssl.keyPassword is not a standard Java Virtual Machine option.
3. A keystore of the Java Virtual Machine, if any.
4. Other keystores specified in the cfg/ddm.security file.
You can configure Dynamic Data Masking to read database credentials and signed certificates from the
same custom keystore.
If you configure a custom security provider, for example CyberArk, to work with the keystore, the security
provider must support reading key entries from the keystore. Otherwise, Dynamic Data Masking is not able to
get signed certificates from the keystore to perform the SSL handshake with the clients.
If you use one key password to create all key entries in the keystore, Dynamic Data Masking can load all signed certificates at run time. If you use different key passwords to create key entries, Dynamic Data Masking reads as many key entries as possible using the one specified key password and skips key entries that use another key password.
If you do not want to mistakenly store database credentials and signed certificates in the same keystore, do
not specify the <storeName> parameter in the keystore configuration section of the cfg/ddm.security file. If
you do not specify the <storeName> parameter, that store is unavailable for selection when you configure a
database in the Add Database form.
Dynamic Data Masking internally manages the default keystore file, cfg/ddm.jceks. Do not change the
default keystore, for example by importing certificates to the default keystore. Do not specify the default
keystore in the cfg/ddm.security file.
Truststore Configuration
To configure SSL communication in Dynamic Data Masking, copy truststores from the database client to the
Dynamic Data Masking Server. Usually you can copy the client truststores without changing them. After you
copy the truststores to the Dynamic Data Masking Server installation, configure the cfg/ddm.security file to
set the truststore parameters.
If necessary, you can configure the trust strategy to accept new database certificates at run time, and
permanently store them in the preferred truststore.
The following table describes the truststore properties that you configure in the truststore section of the
cfg/ddm.security file:
provider
Name of the specific security provider that works with the truststore. Optional.
preferred
If set to "true," Dynamic Data Masking loads the preferred truststore first, before any other truststores. Optional. Default is false.
Dynamic Data Masking can add new certificates to the preferred truststore at run time. If you do not set a preferred truststore, Dynamic Data Masking might use newly accepted certificates only for the duration of the current session. For more information, see "Trust Strategies" on page 36.
You can set one preferred truststore in the cfg/ddm.security file.
At run time, Dynamic Data Masking loads truststores and checks trusted certificates in the following order:
If you do not configure the Dynamic Data Masking administrative tools with truststores in the file cfg/
client.security, the administrative tools accept any signed certificate that the Dynamic Data Masking
Server provides.
Note that JDBC database drivers can use only one truststore, which contains all public certificates and is set in the Java Virtual Machine system properties.
The Dynamic Data Masking Server automatically generates a composite temporary truststore file named
cfg/ddm.temp.jceks.
This truststore is used to discover metadata in SSL-enabled databases and perform impersonation in
Dynamic Data Masking. It is also used to test the connection to SSL-enabled databases through the Add
Database form.
The composite truststore file contains entries added from all configured truststores, including the truststore
defined in the Java Virtual Machine system properties. That composite truststore is set, and can override an
existing truststore, in the Java Virtual Machine system properties. The Dynamic Data Masking Server always
creates a new temporary composite truststore at startup for the current Dynamic Data Masking Server
session. The Dynamic Data Masking Server also deletes the old temporary composite truststore, if it remains
after a previous session.
The temporary composite truststore is used to support the functionality of JDBC drivers in the Java Virtual
Machine. If you have not configured the Dynamic Data Masking Server to work with SSL-enabled databases,
the Dynamic Data Masking Server does not generate the composite truststore.
Key Strategies
If Dynamic Data Masking uses multiple signed certificates to perform the handshake with database clients,
you must configure a simple port strategy or an advanced port strategy in the cfg/ddm.security file.
By default, the standard Java implementation chooses the first alias it finds in the keystore for which there is
a private key and a key of the right type for the chosen cipher suite. The selected alias does not necessarily
correspond to the requested domain, which can cause certificate errors.
To overcome this limitation, you can configure a simple port strategy or an advanced port strategy that maps
each SSL host and port combination in Dynamic Data Masking to a specific alias. Each alias corresponds to a
key entry with a private key and signed certificate in the keystore that Dynamic Data Masking uses for the
SSL handshake with the client.
The following image shows the keystrategies configuration section of the cfg/ddm.security file:
Alias names must be unique across all keystores. If alias names are not unique, Dynamic Data Masking uses the key entry with the first matching alias that it finds to perform the handshake with the client.
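The keystrategies image is not reproduced in this text. As a rough sketch only, modeled on the StrategyDescriptor layout that appears in the Kerberos example later in this chapter, a key strategy might map an SSL port to a keystore alias as follows; the strategy class and all element names here are hypothetical:

```xml
<!-- Hypothetical sketch only: the class name and elements are illustrative
     and are not taken from this guide. -->
<keyStrategies type="ArrayList">
  <entry type="StrategyDescriptor">
    <fqcn>com.example.KeyPortStrategy</fqcn>  <!-- hypothetical class -->
    <configuration>
      <ports type="HashMap">
        <entry>
          <key>1521</key>                     <!-- example SSL port -->
          <value>oracle_prod_alias</value>    <!-- alias of a key entry -->
        </entry>
      </ports>
    </configuration>
  </entry>
</keyStrategies>
```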
You do not need to configure the keystrategy section when you have not configured the Dynamic Data
Masking Server with a keystore, or when a keystore configured in the Dynamic Data Masking Server contains
only one signed certificate.
If you have not configured the Dynamic Data Masking Server with a keystore, Dynamic Data Masking uses the
self-signed certificate generated in the default Dynamic Data Masking keystore file cfg/ddm.jceks. This
certificate is associated with the reserved alias name ddm_self_signed. Do not use this alias name in any
other keystores.
In this scenario, the Dynamic Data Masking Server provides the self-signed certificate to the client.
If you have configured the Dynamic Data Masking Server with one keystore, defined in the file jvm.params or
cfg/ddm.security, that contains only one signed certificate, Dynamic Data Masking uses that certificate.
The following scenarios describe the keystore strategy requirements for both the Dynamic Data Masking Server and the administrative tools:
- Keystore not set, truststore not set: The Dynamic Data Masking Server provides the self-signed certificate to the SSL client, and the key strategy is not relevant. The administrative tools accept any certificate and can connect.
- Keystore set with certificates, truststore not set: If you have not configured a port strategy, the Dynamic Data Masking Server provides the first found signed certificate to the SSL client. If you have configured a port strategy, the Server provides a signed certificate associated with an alias mapped to a specific SSL port. The administrative tools accept any certificate and can connect.
- Keystore not set, truststore set with certificates: The Dynamic Data Masking Server provides the self-signed certificate to the SSL client, and the key strategy is not relevant. The administrative tools cannot connect, because the Dynamic Data Masking Server has no matching private key.
- Keystore set with certificates, truststore set with certificates: The Dynamic Data Masking Server must have a configured port strategy and provides to the SSL client a signed certificate associated with an alias mapped to a specific SSL port. The administrative tools can connect, but the Dynamic Data Masking Server must have a matching private key.
Trust Strategies
A trust strategy tells Dynamic Data Masking how to handle a certificate that does not exist in the Dynamic
Data Masking truststore and is therefore rejected by the trust manager.
When Dynamic Data Masking fails to validate a specific certificate, it consults the configured trust strategies to determine whether a strategy trusts the certificate chain. If a trust strategy determines that it trusts the certificate, it returns an "accept" value to the trust manager. If the trust strategy returns the "accept_permanently" or "accept_temporarily" value, Dynamic Data Masking adds the certificate to the preferred or temporary truststore, respectively. Dynamic Data Masking then validates the certificate chain again, with a likely positive result.
If the trust strategy determines that it cannot trust a certificate, it returns a "reject" value to the trust
manager. If all configured trust strategies reject the certificate and return a "reject" value to the trust
manager, Dynamic Data Masking denies the client connection.
Trust strategies allow Dynamic Data Masking to add accepted certificates to the Dynamic Data Masking
truststore at run time, similar to web browsers. For example, if a database uses a self-signed certificate to
perform the SSL handshake with the client, you can configure the self-signed strategy in Dynamic Data
Masking to accept all self-signed certificates. Dynamic Data Masking adds those certificates to the
truststore. Similarly, you can use different strategies to accept and store new certificates in the truststore
without any manual action.
Dynamic Data Masking is installed with two trust strategies that you can configure in the cfg/ddm.security
file:
The following image shows a configuration to reject all self-signed certificates and permanently accept all
other certificates:
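That image is not reproduced in this text. Purely as an illustration, reusing the StrategyDescriptor layout shown elsewhere in this chapter, such a configuration might look like the following; the strategy class names and the accept element are hypothetical and are not taken from this guide:

```xml
<!-- Hypothetical sketch only: class names and the accept element are
     illustrative, not documented element names. -->
<trustStrategies type="ArrayList">
  <entry type="StrategyDescriptor">
    <fqcn>com.example.SelfSignedStrategy</fqcn>      <!-- hypothetical -->
    <configuration>
      <accept>reject</accept>                        <!-- reject self-signed -->
    </configuration>
  </entry>
  <entry type="StrategyDescriptor">
    <fqcn>com.example.AnyCertificateStrategy</fqcn>  <!-- hypothetical -->
    <configuration>
      <accept>accept_permanently</accept>            <!-- accept all others -->
    </configuration>
  </entry>
</trustStrategies>
```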
The final result of trust strategy processing is a combination of the following steps:
The following table describes when Dynamic Data Masking stores an accepted certificate in either the
preferred or temporary truststore, based on the accept value of the trust strategy and the preferred value of
the truststore:
- Preferred truststore set, trust strategy returns accept permanently: The connection is allowed. Dynamic Data Masking permanently stores the new certificate and uses it for the duration of the current and all future sessions.
- Preferred truststore set, trust strategy returns accept temporarily: The connection is allowed. Dynamic Data Masking temporarily accepts the new certificate and uses it for the duration of the current session.
- Preferred truststore not set, trust strategy returns accept permanently: The connection is allowed. Dynamic Data Masking temporarily accepts the new certificate and uses it for the duration of the current session.
- Preferred truststore not set, trust strategy returns accept temporarily: The connection is allowed. Dynamic Data Masking temporarily accepts the new certificate and uses it for the duration of the current session.
The following image shows a configuration of the two trust strategies in the cfg/ddm.security file:
Protocols and Cipher Suites
To improve network security in Dynamic Data Masking, you can set stronger security protocols and cipher
suites in the cfg/ddm.security file. Stronger security protocols might help prevent multiple types of
malicious attacks, such as a man-in-the-middle attack. You can define global settings for protocols and
cipher suites, or you can configure protocols and ciphers that map to a specific Dynamic Data Masking host
and port.
Global Configuration
You can define protocols and ciphers as global settings that apply to all SSL ports in Dynamic Data Masking.
The following image shows an example configuration in the cfg/ddm.security file for:
• TLSv1.2 communication between clients and Dynamic Data Masking, and communication between
Dynamic Data Masking and databases
• Specification of two cipher suites for message encryption
In this example, you have manually disabled the TLSv1.1 protocol, because TLSv1.1 does not support the specified cipher suites. Otherwise, Dynamic Data Masking disables protocols that do not support any of the listed cipher suites.
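The configuration image is not reproduced in this text. As an illustrative sketch only, with hypothetical element names (the standard JSSE protocol and cipher suite names themselves are real), a global setting of this kind might look like:

```xml
<!-- Hypothetical sketch: element names are illustrative only. -->
<protocols>TLSv1.2</protocols>
<cipherSuites>
  TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
  TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
</cipherSuites>
```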
When you start Dynamic Data Masking, it logs a warning about unsupported protocols and automatically
disabled protocols. To print a list of the global protocols and cipher suites that Dynamic Data Masking uses at run time, run the following server network commands:
If a database client does not support protocols and ciphers that you configure in the cfg/ddm.security file,
the client cannot connect to Dynamic Data Masking or any databases through Dynamic Data Masking. In this
case, you must configure the client software with the appropriate protocols and ciphers.
Port Configuration
You can configure an advanced port strategy that maps SSL-enabled host and port values to an alias in
Dynamic Data Masking, in addition to optional security protocol and cipher suites. Configuring a port strategy
The following image shows an example of advanced port configuration in the cfg/ddm.security file:
In this example, TLSv1.2 communication is allowed only on port 1535, and two cipher suites are defined.
You can configure as many strategies as required in the cfg/ddm.security file. However, do not specify each
host and port more than once. Dynamic Data Masking uses the first found entry with a matching [host:]port
entry to configure the SSL port and find the matching signed certificate to perform the SSL handshake with
the client.
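The advanced port configuration image is not reproduced in this text. As a sketch only, modeled on the PortStrategy section shown in the Kerberos example later in this chapter (the PortStrategy class name appears there; the protocols and cipherSuites keys here are hypothetical), such a strategy for port 1535 might look like:

```xml
<!-- Sketch modeled on the PortStrategy section shown later in this guide.
     The "protocols" and "cipherSuites" keys are hypothetical. -->
<serviceStrategies type="ArrayList">
  <entry type="StrategyDescriptor">
    <fqcn>com.activebase.security.service.strategies.PortStrategy</fqcn>
    <configuration>
      <ports type="HashMap">
        <entry>
          <key>1535</key>
          <value type="HashMap">
            <entry>
              <key>protocols</key>
              <value>TLSv1.2</value>
            </entry>
            <entry>
              <key>cipherSuites</key>
              <value>TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384</value>
            </entry>
          </value>
        </entry>
      </ports>
    </configuration>
  </entry>
</serviceStrategies>
```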
Protocols and cipher suites defined in an advanced port strategy might override any global settings. The
following table shows example configuration settings and the corresponding run-time behavior:
To print a list of configured protocols and cipher suites for a specific SSL host and port, run the following
server network commands:
If you make changes to the cfg/ddm.security file, run the following command to reload the security
configuration without restarting the Dynamic Data Masking Server:
For more information on Kerberos authentication for Hive and Impala, see the How-To Library (H2L) article "Enabling Kerberos for Hive and Impala Databases in Dynamic Data Masking."
2. Copy the krb5.conf file and the keytab file for the Dynamic Data Masking service principal to the
Dynamic Data Masking Server machine.
3. If you have not already created an XML ddm.security configuration file, create the file in the following
location: <DDM>/cfg/ddm.security
4. Configure the ddm.security file as shown in the example below:
<XML>
  <kdc>/etc/krb5.conf</kdc>
  <jaasConfig type="ArrayList">
    <entry type="JaasDescriptor">
      <fqcn>com.activebase.security.jaas.JaasProcessorImpl</fqcn>
      <configuration>
        <jaasEntries type="HashMap">
          <entry>
            <key>default</key>
            <value type="ArrayList">
              <entry type="HashMap">
                <entry>
                  <key>moduleClass</key>
                  <value>com.sun.security.auth.module.Krb5LoginModule</value>
                </entry>
                <entry>
                  <key>moduleFlag</key>
                  <value>required</value>
                </entry>
                <entry>
                  <key>options</key>
                  <value type="HashMap">
                    <entry>
                      <key>principal</key>
                      <value>ddmserver/ddmhost@realm.com</value>
                    </entry>
                    <entry>
                      <key>keyTab</key>
                      <value>cfg/ddmService.keytab</value>
                    </entry>
                  </value>
                </entry>
              </entry>
            </value>
          </entry>
        </jaasEntries>
      </configuration>
    </entry>
  </jaasConfig>
</XML>
For information about enabling Kerberos for Hive, see the How-To Library (H2L) article "Enabling Kerberos for Hive and Impala Databases in Dynamic Data Masking."
1. For the value of the principal key, specify the Dynamic Data Masking SPN.
For example: ddmserver/ddmhost@realm.com
2. For the value of the keyTab key, specify the keytab file name with path.
For example: cfg/ddmService.keytab
3. If Hive is configured with auth-conf, define the port strategy for the service:
• Key is the port number of Dynamic Data Masking for the Hive service.
• Value is a map with the key as sasl.qop and the value as auth-conf.
For example, the following ddm.security file is configured for the Dynamic Data Masking for Hive
service running on port 10081:
<XML>
  <kdc>/etc/krb5.conf</kdc>
  <jaasConfig type="ArrayList">
    <entry type="JaasDescriptor">
      <configuration>
        <jaasEntries type="HashMap">
          <entry>
            <value type="ArrayList">
              <entry type="HashMap">
                <entry>
                  <value>com.sun.security.auth.module.Krb5LoginModule</value>
                  <key>moduleClass</key>
                </entry>
                <entry>
                  <value>required</value>
                  <key>moduleFlag</key>
                </entry>
                <entry>
                  <value type="HashMap">
                    <entry>
                      <value>ddmserver/ddmhost@realm.com</value>
                      <key>principal</key>
                    </entry>
                    <entry>
                      <value>cfg/ddmService.keytab</value>
                      <key>keyTab</key>
                    </entry>
                    <entry>
                      <value>false</value>
                      <key>debug</key>
                    </entry>
                  </value>
                  <key>options</key>
                </entry>
              </entry>
            </value>
            <key>default</key>
          </entry>
        </jaasEntries>
      </configuration>
      <fqcn>com.activebase.security.jaas.JaasProcessorImpl</fqcn>
    </entry>
  </jaasConfig>
  <serviceStrategies type="ArrayList">
    <entry type="StrategyDescriptor">
      <fqcn>com.activebase.security.service.strategies.PortStrategy</fqcn>
      <configuration>
        <ports type="HashMap">
          <entry>
            <key>10081</key>
            <value type="HashMap">
              <entry>
                <key>sasl.qop</key>
                <value>auth-conf</value>
              </entry>
            </value>
          </entry>
        </ports>
      </configuration>
    </entry>
  </serviceStrategies>
</XML>
Client Configuration
After you configure the ddm.security file, configure the client.
You can connect to the database using a direct connection, or you can connect through Dynamic Data Masking. In the client connection URL, add the SaslQOP property using one of the following formats:
If you connect to the database directly, use the following format:
If you connect through Dynamic Data Masking, use the following format:
The following table describes the JAAS parameters supported in the cfg/ddm.security file:
Option Description
kdc The path to the configuration file that provides details of the Kerberos key distribution center (KDC).
JaasDescriptor The JAAS descriptor that provides JAAS processor and configuration entries for the processor.
fqcn The fully qualified class name of the JAAS processor implementation. Dynamic Data Masking
provides one implementation:
com.activebase.security.jaas.JaasProcessorImpl
key The name of the configuration entry. Dynamic Data Masking supports one mandatory entry
called "default."
value The list that contains the map of login modules and their configuration parameters.
value (moduleClass) The fully-qualified class name of the login module implementation. Dynamic Data Masking
supports two implementations:
1. com.sun.security.auth.module.Krb5LoginModule
2. com.ibm.security.auth.module.Krb5LoginModule
value (moduleFlag) Java supports the following standard options: REQUIRED, REQUISITE, SUFFICIENT, and
OPTIONAL.
options The configuration options of the login module as a map of the option name and the corresponding value.
Dynamic Data Masking supports the following configuration options of the
com.sun.security.auth.module.Krb5LoginModule login module:
- principal
- keyTab
- useKeyTab
- storeKey
- doNotPrompt
- isInitiator
- useTicketCache
- refreshKrb5Config
- renewTGT
- storePass
- clearPass
- useFirstPass
- debug
For more information on these options, refer to the following Oracle documentation:
https://docs.oracle.com/javase/8/docs/jre/api/security/jaas/spec/com/sun/security/auth/
module/Krb5LoginModule.html
Dynamic Data Masking supports the following configuration options of the
com.ibm.security.auth.module.Krb5LoginModule login module:
- KRB5CCNAME
- principal
- UseDefaultCcache
- ticketcache
- credsType
- both
For more information on these options, refer to the following IBM documentation:
https://www.ibm.com/support/knowledgecenter/en/SSYKE2_6.0.0/
com.ibm.java.security.component.60.doc/security-component/jgssDocs/jaas_login_user.html
Connection Management
This chapter includes the following topics:
Configuring the Target Database
Define the database parameters that the Dynamic Data Masking service uses to connect to the target
database. The target database configuration specifies the database, user account, and connection settings.
Testing a Connection
When you test a connection, the Dynamic Data Masking service validates the database configuration and
connection information.
Define all connection parameters before you test a database connection. If the Dynamic Data Masking
service cannot connect to the database, check with the database administrator to verify the database
configuration information.
1. Select a database node in the Management Console tree and click Tree > Edit.
The Edit window appears.
2. Click Test Connection.
The Management Console sends a request with the database object to the Dynamic Data Masking
Server. The Dynamic Data Masking Server reads the database object from the request and tests the connection to that database. A confirmation message appears if the connection is valid. If the
connection fails, an error message appears.
You can enable SSL communication between the Data Vault and the Dynamic Data Masking Server. For more
information about enabling SSL communication, see the "Security" chapter. Communication is encrypted
between the Dynamic Data Masking FAS service and the Data Vault, and between the Dynamic Data Masking
FAS service and the Data Vault client. For information on how to enable SSL in the Data Vault, see the Data
Vault Administrator Guide. To use SSL communication, you must have Data Archive version 6.4.3 or later
installed.
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Name for the database node that appears in the Management Console tree.
Server Address
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the Data Vault Server and port number.
Server Port
Keystore
Select Custom if you have configured a custom keystore. Select Default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the Data Vault user account to log in to the Data Vault. The Data Vault user must have the
SELECT access privilege for all the tables to which the client user has the SELECT access privilege. This
parameter is valid for the default keystore.
DBA Password
Password for the Data Vault user. This parameter is valid for the default keystore.
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined when the
CyberArk account was created. This parameter is valid for custom keystores.
SSL
Select to enable SSL communication between the database and the Dynamic Data Masking Server. For
more information on SSL configuration, see Chapter 3, “Security” on page 25.
You can use the Generic Database node for databases that do not have a dedicated database node. For
example, use the Generic Database node with a Netezza or Greenplum database.
Create a connection rule that uses the Switch to Database rule action to define the target database. Specify a
database in the rule that corresponds to the Dynamic Data Masking Database Name parameter.
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
DSN Name
Unary Table
Select if the database allows parentheses in a function call that does not have any arguments.
The SQL command that Dynamic Data Masking uses to retrieve the data required for impersonation.
For example, you might enter the following command to retrieve the search_path for a Greenplum
database:
SELECT COALESCE(substring( useconfig[1] from '%=#"%#"' for '#' ),
substring( datconfig[1] from '%=#"%#"' for '#'), 'PUBLIC') as AUTH_CURRENT_SCHEMA
Select to indicate that Dynamic Data Masking must retrieve the data for impersonation for each request.
If the check box is unchecked, Dynamic Data Masking retrieves the data once per session.
Impersonation Commands
Impersonation commands for the database, separated by a semicolon (;) and a line break. Words that
are preceded by a backslash and left parenthesis and followed by a right parenthesis are Dynamic Data
Masking symbols that Dynamic Data Masking replaces with symbol values. For example, \(SYMBOL).
Note: For a Greenplum database, you must set the AUTH_CURRENT_SCHEMA symbol. You can use the
following command to set the symbol:
SET SEARCH_PATH = \(AUTH_CURRENT_SCHEMA)
Cleanup Commands
Cleanup commands for the database, separated by a semicolon (;) and a line break.
Sanity check script to verify that the Dynamic Data Masking connection to the database is valid.
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
Username for the database user account to log in to the database. The database user must be a
privileged user that has SELECT access to all the tables that the client user has SELECT access to.
DBA Password
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
Greenplum Connections
Configure database privileges and impersonation and cleanup commands for a Greenplum database.
The required user privileges and commands for impersonation and cleanup vary based on the database and
database version. The following privileges and commands are examples that work for some versions of the
database.
Impersonation Commands
You might enter the following commands in the Impersonation Commands text box for a Greenplum
database:
SET ROLE \(AUTH_USERNAME)
SET SEARCH_PATH = \(AUTH_CURRENT_SCHEMA)
Cleanup Commands
You might enter the following commands in the Cleanup Commands text box for a Greenplum database:
RESET ROLE;
RESET SEARCH_PATH;
MySQL Connections
Configure database privileges and impersonation and cleanup commands for a MySQL database.
The required user privileges and commands for impersonation and cleanup vary based on the database and
database version. The following privileges and commands are examples that work for some versions of the
database.
Impersonation Commands
You might enter the following command in the Impersonation Commands text box for a MySQL database:
USE \(AUTH_CATALOG)
Cleanup Commands
You might enter the following command in the Cleanup Commands text box for a MySQL database:
USE \(ADMIN_CATALOG)
Netezza Connections
Configure database privileges and impersonation and cleanup commands for a Netezza database.
The required user privileges and commands for impersonation and cleanup vary based on the database and
database version. The following privileges and commands are examples that work for some versions of the
database.
Impersonation Commands
You might enter the following commands in the Impersonation Commands text box for a Netezza database:
SET CATALOG \(AUTH_CATALOG)
SET SCHEMA \(AUTH_CURRENT_SCHEMA)
EXECUTE AS \(AUTH_USERNAME)
Cleanup Commands
You might enter the following commands in the Cleanup Commands text box for a Netezza database:
SET CATALOG \(ADMIN_CATALOG)
REVERT
You must configure Dynamic Data Masking to direct the database request to the predefined rule set before
the request goes to the user-defined masking rules.
Dynamic Data Masking includes predefined rule sets for the following databases:
Greenplum
You can find the predefined security rule set for a Greenplum database in the following location:
<Dynamic Data Masking installation>\Wrappers\ImpersonationRules\GreenplumRS.xml
MySQL
You can find the predefined security rule set for a MySQL database in the following location:
<Dynamic Data Masking installation>\Wrappers\ImpersonationRules\MySQL.xml
Netezza
You can find the predefined security rule set for a Netezza database in the following location:
<Dynamic Data Masking installation>\Wrappers\ImpersonationRules\Netezza.xml
If you execute commands that alter the user context, you might want Dynamic Data Masking to skip one or
more of the impersonation commands. To skip an impersonation command, create a security rule that sets
the symbol value to DDM_SYSTEM_COMMAND1. Dynamic Data Masking skips, and does not execute, any
impersonation command that uses a symbol whose value is DDM_SYSTEM_COMMAND1. For example, on a
Netezza database, if you want to skip the command that uses the AUTH_CURRENT_SCHEMA symbol, you
would create a rule with the following rule action:
You can use the Hive database type to access Hadoop-compatible file systems. To connect to multiple Hive
databases, create database nodes in the Management Console and enter different sets of driver files in the
classpath parameter.
Name for the database that appears in the Management Console tree.
Driver Classpath
Classpath of the Hive database driver files on the Dynamic Data Masking Server. Use semicolons to
separate multiple classpaths.
You can use an asterisk (*) to indicate all the jar files in a directory. Dynamic Data Masking ignores files
in the directory that are not jar files. For example, you might enter the following location for Windows:
C:\JDBC_Drivers\Hive\hive\*;C:\JDBC_Drivers\Hive\hadoop\*
Driver Class Name
Fully qualified class name of the Hive driver specified in the Driver Classpath property.
If the Hive database has Kerberos authentication enabled, the URL properties auth and
kerberosAuthType are mandatory.
When you connect to a Hive database with Kerberos authentication enabled, specify the server principal
of Hive, even though it is included in the JDBC URL. Otherwise, leave this parameter blank.
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
Optional user name for the database user account to log in to the Hive database. The database user
must be a privileged user that has SELECT access to all the tables that the client user has SELECT
access to. This parameter is valid for the default keystore.
Leave this parameter blank if the Hive database has Kerberos authentication enabled.
DBA Password
Optional password for the database user. This parameter is valid for the default keystore.
Leave this parameter blank if the Hive database has Kerberos authentication enabled.
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
The IBM Db2 connection request does not contain information about the database. You must define the
target database that Dynamic Data Masking forwards the request to. Make a connection rule that uses the
Switch to Database rule action to define the target database. Specify a database in the rule that corresponds
to the Dynamic Data Masking Database Name parameter.
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Server host name or TCP/IP address for the IBM Db2 database.
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
Server Port
Optional Parameters
For example, if the IBM Db2 database is configured with the SERVER_ENCRYPT authentication method,
you might enter the following parameter:
AuthenticationMethod=encryptedUIDPassword
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the IBM Db2 database. The database user must be
a privileged user that has SELECT access to all the tables that the client user has SELECT access to.
This parameter is valid for the default keystore.
DBA Password
Password for the database user. This parameter is valid for the default keystore.
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
SSL
Select to enable SSL communication between the database and the Dynamic Data Masking Server. For
more information on SSL configuration, see the chapter "Security."
Use the IBM Db2 Control Center to create a privileged database user, <DDM Admin>, that corresponds to an
administrator user on your operating system or a standard user on your operating system.
If <DDM Admin> corresponds to an administrator user on your operating system, you do not need to grant the
user additional privileges.
If <DDM Admin> corresponds to a standard user on your operating system, the user must have SYSMON
authorization or higher. If you use the encrypted password option, you must also run the following
commands:
Create a security rule with the following properties to set the current schema:
Matcher
Select the Text matcher. In the Text matcher, select the Regular Expression Identification Method.
You can use the Impala database type to access Hadoop-compatible file systems.
Use Test Connection to verify that the Dynamic Data Masking service can access the database.
Name for the database that appears in the Management Console tree.
Driver Classpath
Classpath of the Impala database driver files on the Dynamic Data Masking Server. Use semicolons to
separate multiple classpaths.
You can use an asterisk (*) to indicate all the jar files in a directory. Dynamic Data Masking ignores files
in the directory that are not jar files.
Fully qualified class name of the Hive driver specified in the Driver Classpath property.
Example:
org.apache.hive.jdbc.HiveDriver
Connection String (URL)
If the Impala database has Kerberos authentication enabled and the Hive driver is used to connect to
Impala, the URL properties auth and kerberosAuthType are mandatory.
Example: jdbc:hive2://impalaserver:21050/default;principal=impala/impalaserver@realm.com;auth=kerberos;kerberosAuthType=fromSubject
When you connect to an Impala database with Kerberos authentication enabled, specify the server
principal of Impala, even though it is included in the JDBC URL. Otherwise, leave this parameter blank.
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore that is preconfigured for use with Dynamic Data Masking.
KeyStore Name
Name of the custom keystore that you defined in the ddm.security file. This parameter is valid for
custom keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined when the
CyberArk account was created. This parameter is valid for custom keystores.
DBA Username
Optional user name for the database user account to log in to the Impala database. The database user
must have SELECT access privileges for all the tables to which the client user has SELECT access
privileges. This parameter is valid for the default keystore.
Leave this parameter blank if the Impala database has Kerberos authentication enabled.
DBA Password
Optional password for the database user. This parameter is valid for the default keystore.
Leave this parameter blank if the Impala database has Kerberos authentication enabled.
The Informix connection request does not contain information about the database. You must define the
target database that Dynamic Data Masking forwards the request to. Make a connection rule that uses the
Switch to Database rule action to define the target database. Specify a database in the rule that corresponds
to the Dynamic Data Masking Database Name parameter.
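A minimal connection rule sketch follows; the rule name is hypothetical, and the matcher and action names assume the standard connection rule options in the Management Console:

```
Rule Name:         SwitchToInformix
Matcher:           All Incoming Connections
Rule Action:       Switch to Database, with Database = <Dynamic Data Masking Database Name>
Processing Action: Continue
```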
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
Server Port
TCP/IP port of the listener receiving requests in Informix native protocol. The DDM for Informix service
uses this port to communicate with the Informix database.
DRDA Port
TCP/IP port of the listener receiving requests in Informix DRDA protocol. The DDM for Informix (DRDA)
service uses this port to communicate with the Informix database.
Keystore
Select Custom if you have configured a custom keystore. Select Default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the Informix database. The database user must
have SELECT access privileges for all the tables to which the client user has SELECT access privileges.
This parameter is valid for the default keystore.
DBA Password
Password for the database user. This parameter is valid for the default keystore.
KeyStore Name
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
Server Port
If you add the AD user as the DDM administrator, set the following property:
authenticationMethod=ActiveDirectoryPassword
Keystore
Select Custom if you have configured a custom keystore. Select Default if you want to use the default
keystore that is preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to Microsoft Azure SQL Database. The database user
must have SELECT access privileges for all the tables to which the client user has SELECT access
privileges. This parameter is valid for the default keystore.
DBA Password
Password for the database user. This parameter is valid for the default keystore.
KeyStore Name
Name of the custom keystore defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined when the
CyberArk account was created. This parameter is valid for custom keystores.
SSL
Select to enable SSL communication between the database and the Dynamic Data Masking Server. For
more information on SSL configuration, see Chapter 3, “Security” on page 25.
If you have both Microsoft SQL Server authentication-based users and Active Directory authentication-based
users, then use the Active Directory user as the Dynamic Data Masking administrator.
Note: Dynamic Data Masking allows you to log in to Microsoft Azure SQL Database through the following
authentication methods:
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Server host name or TCP/IP address for the Microsoft SQL Server database.
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
If the Microsoft SQL Server database is configured to use dynamic port allocation, you can enter the
Server Instance Name to identify the listener port. If you enter the instance name, you do not need to
enter a Server Port number.
Server Port
TCP/IP listener port for the Microsoft SQL Server database. If you enter the Server Port number, you do
not need to enter a Service Instance Name.
Optional Parameters
Additional parameters for the Informatica driver for Microsoft SQL Server.
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the Microsoft SQL Server database. The database
user must be a privileged user that has SELECT access to all the tables that the client user has SELECT
access to. This parameter is valid for the default keystore.
DBA Password
Password for the database user. This parameter is valid for the default keystore.
KeyStore Name
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
SSL
Select to enable SSL communication between the database and the Dynamic Data Masking Server. For
more information on SSL configuration, see the chapter "Security."
• USE master;
• CREATE LOGIN <DDM Admin> WITH PASSWORD=<DDM Admin password>, DEFAULT_DATABASE = <default database>;
• GRANT CONTROL SERVER TO <DDM Admin>;
• USE <default database>;
• CREATE USER <database user> FOR LOGIN <DDM Admin>;
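With illustrative placeholder values filled in (the login name, password, and database name below are examples only, not required values), the sequence looks like this:

```sql
USE master;
CREATE LOGIN ddm_admin WITH PASSWORD = 'S3cret!Passw0rd', DEFAULT_DATABASE = SalesDB;
GRANT CONTROL SERVER TO ddm_admin;
USE SalesDB;
CREATE USER ddm_admin FOR LOGIN ddm_admin;
```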
To use a Microsoft SQL Server connection to access Netezza, you must configure a server link between a
Microsoft SQL Server database and the Netezza database. The client tool or application that you use to
access the database must support Microsoft SQL Server.
When you send a request to the Microsoft SQL Server database, the request passes through the Dynamic
Data Masking Server, which alters the request. The Dynamic Data Masking Server applies masking and
auditing rules and uses OpenQuery to direct the request through the Microsoft SQL Server database to the
Netezza database. The Netezza database returns masked data through the Microsoft SQL Server database to
the Dynamic Data Masking Server.
To connect to a Netezza database, create a Microsoft SQL Server connection node in the Management
Console.
If the Dynamic Data Masking service runs on the Oracle database server, you must switch the Oracle listener
to a hidden port. Edit the listener.ora file to change the Oracle listener to a port that is not in use. When
you change the Oracle listener to a hidden port, applications connect to the Dynamic Data Masking listener
port instead of the database.
To route applications to the Dynamic Data Masking listener port, add a database alias for the Dynamic
Data Masking service to tnsnames.ora. Dynamic Data Masking uses the database alias to listen for
incoming connections to the database.
Click Test Connection to verify that the Dynamic Data Masking service can access the database. If a
database defines multiple instances, Test Connection validates each Oracle instance cyclically.
Instance Name
Listener Address
Listener Port
Service Name
Service name for the target database. Dynamic Data Masking determines the target database based on
the service name or SID in the client connection request.
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the Oracle database. The database user must be a
privileged user that has SELECT access to all the tables that the client user has SELECT access to. This
parameter is valid for the default keystore.
DBA Password
Password for database user. This parameter is valid for the default keystore.
KeyStore Name
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
SSL
Select to enable SSL communication between the database and the Dynamic Data Masking Server. For
more information on SSL configuration, see the chapter "Security."
Run the following database commands to create a Dynamic Data Masking administrator user and grant the
required privileges:
Using DBLink
If you use DBLink to access the Oracle database, you must set DBLink to PUBLIC. If DBLink is not set to
PUBLIC, Dynamic Data Masking will not be able to access the database.
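For reference, a public database link is created with the CREATE PUBLIC DATABASE LINK statement; the link name, user, password, and connect string in this sketch are hypothetical:

```sql
CREATE PUBLIC DATABASE LINK sales_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'remote_db';
```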
Before you change the tnsnames.ora file, back up the original copy.
1. Open tnsnames.ora. By default, this file is located in the following location: <Oracle install
directory>/app/oracle/product/<product version>/server/NETWORK/ADMIN
2. Find the following entry in the tnsnames.ora file:
DBNAME=(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)(HOST=dbServer)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=prod.mycompany.com)))
3. Replace the service listener port and host with the Dynamic Data Masking host and listener port.
The following tnsnames.ora entry shows the updated host and listener port.
DBNAME=(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)(HOST=DynamicDataMasking)(PORT=1525))
(CONNECT_DATA=(SERVICE_NAME=prod.mycompany.com)))
Create a security rule with the following properties to set the current schema:
Matcher
Select the Text matcher. In the Text matcher, select the Regular Expression Identification Method.
Note: For alternative syntaxes of SET SCHEMA, you can change the regular expression in the Text matcher
based on the SQL syntax option that you use for the SET SCHEMA command.
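One pattern that covers several common SET SCHEMA variants, including quoted schema names, is sketched below. It is an assumption for illustration, not the expression shipped with the product; tailor it to the syntax option you use.

```shell
# Illustrative pattern covering "SET SCHEMA name", "SET CURRENT SCHEMA = name",
# and quoted variants such as "SET SCHEMA 'name'".
PATTERN="SET +(CURRENT +)?SCHEMA *=? *'?[A-Za-z_][A-Za-z0-9_]*'?"

echo "SET SCHEMA 'APP_DATA'"    | grep -Eiq "$PATTERN" && echo ok
echo "SET CURRENT SCHEMA SCOTT" | grep -Eiq "$PATTERN" && echo ok
```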
Use Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
Server Port
Optional Parameters
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore that is preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the PostgreSQL database. The database user must
have SELECT access privileges for all the tables to which the client user has SELECT access privileges.
This parameter is valid for the default keystore.
DBA Password
Password for the database user. This parameter is valid for the default keystore.
Keystore Name
Name of the custom keystore defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined when the
CyberArk account was created. This parameter is valid for custom keystores.
• rolsuper
• rolinherit
• rolcanlogin
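These are role attributes in the PostgreSQL pg_roles system catalog. As an illustration (the role name is hypothetical), you can confirm that the Dynamic Data Masking database user has them with a query such as:

```sql
SELECT rolname, rolsuper, rolinherit, rolcanlogin
FROM pg_roles
WHERE rolname = 'ddm_admin';
```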
Create a security rule with the following properties to set the current schema:
The following image shows an example definition of the CURRENT_PATH symbol on the Edit Rule page:
The Sybase connection request does not contain information about the database. You must define the target
database that Dynamic Data Masking forwards the request to. Make a connection rule that uses the Switch to
Database rule action to define the target database. Specify a database in the rule that corresponds to the
Dynamic Data Masking Database Name parameter.
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
Server Port
Optional Parameters
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the Sybase database. The database user must be a
privileged user that has SELECT access to all the tables that the client user has SELECT access to. This
parameter is valid for the default keystore.
DBA Password
Password for the database user account. This parameter is valid for the default keystore.
KeyStore Name
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
Log in to a Sybase client as an administrator that is not the Dynamic Data Masking administrator and run the
following commands:
If the quoted_identifier option is set to off or the query contains a table name with aliases, a query sent to the
database through Dynamic Data Masking returns an error. You must create a security rule that removes
double quotes from the query.
Queries to the database must use a masking rule with the following parameters:
Rule Name
Matcher
Select the Any matcher. The rule applies to all queries sent to the database.
Rule Action
Select the Search and Replace rule action. In the Search Text field, enter a double quote ("). Leave the
Replacement String field blank. In the query, the Rule Engine removes double quotes.
Select the Log When Rule is Applied check box to create a line in the rule.log file when the Rule Engine
applies the rule.
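The effect of the rule is equivalent to stripping every double quote from the statement text. A quick sketch of the transformation, using tr in place of the Rule Engine for illustration:

```shell
# The Search and Replace rule removes every double quote from the query;
# tr -d '"' performs the same transformation on a sample statement.
QUERY='SELECT "t"."NAME" FROM "CUSTOMERS" "t"'
echo "$QUERY" | tr -d '"'
# SELECT t.NAME FROM CUSTOMERS t
```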
The Teradata connection request does not contain information about the database. You must define the
target database that Dynamic Data Masking forwards the request to. Make a connection rule that uses the
Switch to Database rule action to define the target database. Specify a database in the rule that corresponds
to the Dynamic Data Masking Database Name parameter.
Click Test Connection to verify that the Dynamic Data Masking service can access the database.
Server Address
Note: Verify that there is no firewall that prohibits the Dynamic Data Masking Server from connecting to
the database server and port number.
Server Port
Keystore
Select custom if you have configured a custom keystore. Select default if you want to use the default
keystore preconfigured for use with Dynamic Data Masking.
DBA Username
User name for the database user account to log in to the Teradata database. The database user must be
a privileged user that has SELECT access to all the tables that the client user has SELECT access to.
This parameter is valid for the default keystore.
DBA Password
Password for the database user account. This parameter is valid for the default keystore.
KeyStore Name
Name of the custom keystore, defined in the ddm.security file. This parameter is valid for custom
keystores.
Alias
Alias name for the custom keystore. For CyberArk accounts, the alias name was defined during creation
of the CyberArk account. This parameter is valid for custom keystores.
The Dynamic Data Masking administrator must have the following privileges to run SELECT * statements:
• The administrator must be able to make the client catalog the default catalog. The administrator must
also be able to make the client catalog the default catalog when the client's default catalog is set within
the client's profile.
• The administrator must have all the SELECT and EXECUTE grants, or roles that contain the grants, that the
client user has on objects that you want to mask.
• If the client session is set with query banding and one of the query bands is ProxyUser, the Dynamic Data
Masking administrator must have the same CONNECT THROUGH grants as the client user.
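For example, the SELECT and EXECUTE grants might be mirrored with statements along these lines; the database, table, and user names are illustrative only, and your Teradata grant syntax may differ:

```sql
GRANT SELECT ON hr_db.employees TO ddm_admin;
GRANT EXECUTE PROCEDURE ON hr_db TO ddm_admin;
```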
1. Download the Teradata drivers from the Teradata website. You need the following drivers:
• tdgssconfig.jar
• terajdbc4.jar
2. Stop the Dynamic Data Masking Server Windows service.
3. Open a Server Control window and run the following commands:
• server stop
• server remove
4. Save the Teradata drivers in the following directory:
<Dynamic Data Masking installation>\lib\ext
5. From Server Control, run the following command:
server start
The Dynamic Data Masking installation contains a JAR file that you must save on the client machine. The
JAR file contains a Java agent, a transformer that intercepts the method calls of JDBC objects, and proxy
classes for JDBC classes. You can find the JAR file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\GenericJDBC.jar
You must perform additional configuration steps based on the client and operating system.
Note: Because the client looks for the Dynamic Data Masking service on the host and port, the client
connection fails when the Dynamic Data Masking Server is down. To connect to the database, the Dynamic
Data Masking Server must be running.
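In general, each client attaches the agent by adding a -javaagent argument to the JVM command line. The following sketch shows the shape of that command for a standalone Java application; the host, port, and jar names are placeholders, not values from this guide:

```shell
# Placeholder values; a real setup points at the Dynamic Data Masking
# Server host and the Generic JDBC service port.
DDM_HOST=ddm.example.com
DDM_PORT=4242
echo "java -javaagent:GenericJDBC.jar=host:${DDM_HOST},port:${DDM_PORT} -jar client-app.jar"
```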
Apache Tomcat Configuration
Follow the configuration steps for the Apache Tomcat client based on the operating system.
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
<Apache Tomcat installation>\lib
3. Find the catalina.bat file. You can find the file in the following directory:
<Apache Tomcat installation>\bin\catalina.bat
4. Save a backup of the catalina.bat file.
5. Open the catalina.bat file in a text editor and find the setlocal line. To append the -javaagent argument
to the Java command line, enter one of the following options under the setlocal line:
• If you want to enable logging on the client machine, enter the following text:
set JAVA_OPTS=-DDDM_GENJDBC_LOG=<config file location and name> -javaagent:..\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
• If you do not want to enable logging on the client machine, enter the following text:
set JAVA_OPTS=-javaagent:..\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
6. To set the classpath, add the following line under the line that you added in the previous step:
set CLASSPATH=%CLASSPATH%;..\lib\GenericJDBC.jar;
7. Save catalina.bat.
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>/Wrappers/jdbc/GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
<Apache Tomcat installation>/lib
3. Find the catalina.sh file. You can find the file in the following directory:
<Apache Tomcat installation>/bin/catalina.sh
4. Save a backup of the catalina.sh file.
5. Open the catalina.sh file in a text editor and find the line that contains #JAVA_OPTS=. To append the -
javaagent argument to the Java command line, enter one of the following options under the line that
contains #JAVA_OPTS=:
• If you want to enable logging on the client machine, enter the following text:
export JAVA_OPTS="-DDDM_GENJDBC_LOG=<config file location and name> -javaagent:../lib/GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>"
• If you do not want to enable logging on the client machine, enter the following text:
export JAVA_OPTS=-javaagent:../lib/GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
<Aqua Data Studio installation>\lib
3. If you want to use datastudio.bat to launch Aqua Data Studio, complete the following steps:
a. Find the datastudio.bat file. You can find the file in the Aqua Data Studio installation directory.
b. Save a backup of the datastudio.bat file.
c. Open the datastudio.bat file in a text editor. To set the environment variable, add one of the
following options to the last line of text in the file:
• If you want to enable logging on the client machine, enter the following text:
-DDDM_GENJDBC_LOG=<config file location and name> -javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
• If you do not want to enable logging on the client machine, enter the following text:
-javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
The line that you modified is similar to the following text:
• java -Dfile.encoding=UTF-8 -Xms512M -DDDM_GENJDBC_LOG=<config file location and name> -javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port> -cp ".\lib\ads.jar;%ADS_PATH%" com.aquafold.datastudio.DataStudio
d. Save datastudio.bat.
4. If you want to use datastudio.exe to launch Aqua Data Studio, complete the following steps:
a. Find the datastudio.cfg file. You can find the file in the Aqua Data Studio installation directory.
b. Save a backup of the datastudio.cfg file.
c. Open the datastudio.cfg file in a text editor. To append the -javaagent argument to the Java
command line, add one of the following Java agent arguments before the -cp text:
• If you want to enable logging on the client machine, enter the following text:
-DDDM_GENJDBC_LOG=<config file location and name> -javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
• If you do not want to enable logging on the client machine, enter the following text:
-javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
<SQL Developer installation>\sqldeveloper\lib
3. Find the sqldeveloper.conf file. You can find the file in the following location:
<SQL Developer installation>\sqldeveloper\bin
4. Save a backup of the sqldeveloper.conf file.
5. Open the sqldeveloper.conf file in a text editor. To append the -javaagent argument to the Java
command line, add one of the following options to the file:
• If you want to enable logging on the client machine, enter the following text:
AddVMOption -DDDM_GENJDBC_LOG=<config file location and name>
AddVMOption -javaagent:..\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
• If you do not want to enable logging on the client machine, enter the following text:
AddVMOption -javaagent:..\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
6. Save sqldeveloper.conf.
7. Use sqldeveloper.exe to launch Oracle SQL Developer.
SQuirreL Configuration
Complete the following steps to configure the SQuirreL SQL client:
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
<SQuirreL installation>\lib
3. Find the squirrel-sql.bat file. You can find the file in the SQuirreL installation directory.
4. Save a backup of the squirrel-sql.bat file.
5. Open the squirrel-sql.bat file in a text editor. To append the -javaagent argument to the Java command
line, add one of the following options to the penultimate line of text in the file:
• If you want to enable logging on the client machine, enter the following text:
-DDDM_GENJDBC_LOG=<config file location and name> -javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
• If you do not want to enable logging on the client machine, enter the following text:
-javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
The line that you modified is similar to the following text:
"%LOCAL_JAVA%" -verbose -Xmx256m -Dsun.java2d.noddraw=true -DDDM_GENJDBC_LOG=<config file location and name> -javaagent:.\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port> -cp %SQUIRREL_CP% -
WebLogic Configuration
Follow the configuration steps for the WebLogic client based on the operating system.
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
...\Middleware\user_projects\domains\base_domain\lib
For example, you might save the file in the following location:
C:\Oracle\Middleware\user_projects\domains\base_domain\lib
3. Find the startWebLogic.cmd file. You can find the file in the following directory:
...\Middleware\user_projects\domains\base_domain\bin
For example, the file might be in the following location:
C:\Oracle\Middleware\user_projects\domains\base_domain\bin\startWebLogic.cmd
4. Save a backup of the startWebLogic.cmd file.
5. Open the startWebLogic.cmd file in a text editor. To append the -javaagent argument to the Java
command line, enter one of the following options below the comments section:
• If you want to enable logging on the client machine, enter the following text:
set JAVA_OPTIONS=%JAVA_OPTIONS% -DDDM_GENJDBC_LOG=<config file location and name> -javaagent:..\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
• If you do not want to enable logging on the client machine, enter the following text:
set JAVA_OPTIONS=%JAVA_OPTIONS% -javaagent:..\lib\GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>
6. To set the classpath, add the following line after the classpath lines and before the echo statements:
set CLASSPATH=%CLASSPATH%;..\lib\GenericJDBC.jar;..\lib\informatica-jdbc-db2-5.1.2.HF1.jar;<additional drivers>
7. Save startWebLogic.cmd.
1. Copy the GenericJDBC.jar file. You can find the file in the following location:
<Dynamic Data Masking installation>/Wrappers/jdbc/GenericJDBC.jar
2. Save the GenericJDBC.jar file in the following location:
.../Middleware/user_projects/domains/base_domain/lib
For example, you might save the file in the following location:
/Oracle/Middleware/user_projects/domains/base_domain/lib
3. Find the startWebLogic.sh file. You can find the file in the following directory:
.../Middleware/user_projects/domains/base_domain/bin
For example, the file might be in the following location:
/Oracle/Middleware/user_projects/domains/base_domain/bin/startWebLogic.sh
4. Save a backup of the startWebLogic.sh file.
5. Open the startWebLogic.sh file in a text editor. To append the -javaagent argument to the Java
command line, enter one of the following options below the comments section:
• If you want to enable logging on the client machine, enter the following text:
export JAVA_OPTIONS="${JAVA_OPTIONS} -DDDM_GENJDBC_LOG=<config file location and name> -javaagent:../lib/GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>"
• If you do not want to enable logging on the client machine, enter the following text:
export JAVA_OPTIONS="${JAVA_OPTIONS} -javaagent:../lib/GenericJDBC.jar=host:<ddm_host>,port:<ddm_generic_service_port>"
6. To set the classpath, add the following line after the classpath lines and before the echo statements:
export CLASSPATH="${CLASSPATH}:../lib/GenericJDBC.jar:../lib/informatica-jdbc-db2-5.1.2.HF1.jar:<additional drivers>"
7. Save startWebLogic.sh.
The Dynamic Data Masking installation includes ODBC DLL files for 64-bit and 32-bit Windows. You must
save the files in the Windows system directory and create Dynamic Data Masking host and port environment
variables. Optionally, you can create a Data Source in the Windows ODBC Administrator to test the
configuration.
• The client must have a Windows 7 or Windows Server 2008 operating system.
• The Windows user that performs the setup must have administrator privileges to modify the ODBC DLLs
or the user must have the rights to grant the required privileges or ownership.
• The Windows user that performs the setup must have the required permissions to rename the Windows
ODBC Driver Manager (odbc32.dll).
Step 2. Grant File Permissions
Before you set up the Driver Manager proxies, you must identify the Driver Manager proxy that you want to
install on the target application architecture and the ODBC 32-bit or 64-bit usage. Then you must change the
owner of the Windows Driver Manager files and grant the required permissions.
1. Find the ODBC Driver Manager DLL file on the Windows machine based on the architecture of the target
application. Complete the remaining steps for each of the required Driver Manager files on the machine.
a. On a 64-bit Windows machine, you can find the following Windows Driver Manager files:
• 64-bit Driver Manager: <Windows installation>\System32\odbc32.dll
• 32-bit Driver Manager: <Windows installation>\SysWOW64\odbc32.dll
b. On a 32-bit Windows machine, you can find the following Windows Driver Manager file:
• 32-bit Driver Manager: <Windows installation>\System32\odbc32.dll
2. In Windows Explorer, right-click the file and click Properties.
The odbc32.dll Properties window opens. The following image shows the window:
12. In the window that opens, select the Administrators group again and click Edit.
The Permission Entry for odbc32.dll window opens.
13. Click the Allow box in the Full control row to grant full permissions to the user.
The following image shows the window with the Full control box selected:
Complete the following steps for each Windows Driver Manager file that you edited in the previous step.
1. Rename the Windows Driver Manager file odbc32.dll to odbc32o.dll.
4. Rename the Dynamic Data Masking Generic ODBC DLL file to odbc32.dll for the 64-bit and 32-bit
version.
a. Rename GenericOdbc64.dll to odbc32.dll.
b. Rename GenericOdbc32.dll to odbc32.dll.
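From an elevated command prompt on a 64-bit machine, the rename steps correspond to commands along these lines. The location of the Dynamic Data Masking DLL files is a placeholder; use the directory where your installation provides GenericOdbc64.dll and GenericOdbc32.dll:

```
ren C:\Windows\System32\odbc32.dll odbc32o.dll
copy <Dynamic Data Masking installation>\GenericOdbc64.dll C:\Windows\System32\odbc32.dll
ren C:\Windows\SysWOW64\odbc32.dll odbc32o.dll
copy <Dynamic Data Masking installation>\GenericOdbc32.dll C:\Windows\SysWOW64\odbc32.dll
```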
1. Open the Windows Start menu, right-click Computer, and click Properties.
The Control Panel opens.
2. On the right side of the Control Panel, click Advanced system settings.
The System Properties window opens.
3. In the System Properties window, click the Advanced tab and click Environment Variables at the bottom
of the window.
The Environment Variables window opens.
4. Under the System variables box, click New.
The New System Variable window opens.
5. In the New System Variable window, enter the following properties:
Variable Name
DDM_HOST
Variable Value
Host name or IP address of the Dynamic Data Masking Server.
Variable Name
DDM_PORT
Variable Value
Port number of the Dynamic Data Masking service.
3. Click Add.
The Create New Data Source window opens.
4. In the Create New Data Source window, click the name of the ODBC driver that you want to create a DSN
for, based on the database that you want to connect to.
5. Click Finish.
The ODBC setup window opens.
You configure MicroStrategy so that the name of the user running a report can be included in an SQL query.
After you configure MicroStrategy and Dynamic Data Masking, you can take actions on Dynamic Data
Masking SQL queries based on user context.
For information about capturing MicroStrategy user context through Dynamic Data Masking, see the How-To Library article "How to Capture MicroStrategy User Context through Dynamic Data Masking."
public static String execute(RuleContext ctx) {
return null;
}
}
2. Compile the code and create a JAR file. For example, microStrategyAccelerator.jar.
3. Configure the JAR file in a Java Action in Dynamic Data Masking.
4. Place the Java Action as the first rule in the rule set.
After you configure the Java Action, Dynamic Data Masking populates the MICROSTRATEGY_USERNAME symbol with the MicroStrategy user name each time a report runs.
5. Create masking rules based on the symbol MICROSTRATEGY_USERNAME.
Access Control
This chapter includes the following topics:
You can set permissions on domain, database, and security rule set nodes to define the users that can edit
the nodes.
A Dynamic Data Masking user can be a privileged user or a non-privileged user. The type of user and type of
permissions the user has on a node in the Management Console tree determines whether the user can view
or make changes to the node.
Privileged User
A privileged user is a user that is connected to the Dynamic Data Masking Server as the administrator or
an LDAP user that belongs to the Dynamic Data Masking administration group.
Privileged users have full access control on the Management Console tree nodes. Privileged users can set
privileges and perform any operation on Management Console tree nodes.
Non-Privileged User
A non-privileged user is a user that does not belong to the Dynamic Data Masking administration group.
In the Management Console tree, domain, database, and security rule set nodes have authorization
properties. Authorization properties define which operations a non-privileged user can perform on
Management Console tree nodes.
Non-privileged users cannot edit the Management Console Dynamic Data Masking Server node or the Server
node children, such as service nodes, logger nodes, and appender nodes. A non-privileged user can have
ownership, read, or read and write privileges on domain, database, and security rule set nodes in the
Management Console tree.
The following table describes the authorizations a non-privileged user can have on a database, domain, or
security rule set node in the Management Console tree:
Authorization Description
Ownership The LDAP user or group owns the node. A node owner has full access control to the node.
A node owner can perform the following operations on the node:
- Set Authorizations
- Read operations
- Write operations
Read The LDAP user or group has read privileges on the node.
A user with read privileges can perform the following operations on the node:
- View the node details
- View the children of the node
Read authorizations are required on the source node for the copy node operation.
Read and Write The LDAP user or group has read and write privileges on the node.
A user with read and write privileges can perform the following operations:
- View the node details
- View the children of the node
- Add
- Edit database and security rule set node details
- Copy (destination node)
- Move (source and destination node)
- Remove (parent and child nodes)
- Edit domain, database, and security rule set names (parent and child nodes)
The read privilege is required on the source node for the copy node operation.
The following table describes the operations a non-privileged user can perform on a database, domain, or
security rule set node and the authorizations the user must have:
Operation Authorizations
Copy node Ownership or read authorizations on the source node and the descendants of the
source node. Ownership or read and write privileges on the destination node.
The user that creates the node is the owner of the node.
Move node Ownership or read authorizations on the source node and the descendants of the
source node. Ownership or read and write privileges on the destination node.
Remove node Ownership or read and write authorizations on the parent node. Ownership or read and
write authorizations on the child node.
Edit database details Ownership or read and write authorizations on the node.
Edit security rule details Ownership or read and write authorizations on the node.
Edit domain name Ownership or read and write authorizations on the parent node. Ownership or read and
write authorizations on the child node.
Edit security rule set name Ownership or read and write authorizations on the parent node. Ownership or read and
write authorizations on the child node.
Edit database name Ownership or read and write authorizations on the parent node. Ownership or read and
write authorizations on the child node.
You can add or copy a Management Console tree node to create a node.
Property Default
Owner User that creates the node. If the user is logged into Dynamic Data Masking as the administrator,
the owner property is empty.
The owner property is set on copied nodes and copied child nodes.
1. In the Management Console, select a domain, database, or security rule set node.
2. Click Tree > Authorization.
The Authorize User or Group window appears.
3. Define a node owner in the owner field and define read and write privileges for users and LDAP groups in
the Authorize User or Group window.
4. Click OK.
Logs
This chapter includes the following topics:
• Logs Overview, 99
• Audit Trail and Reporting, 100
• Loggers, 105
• Appenders, 108
• Log Levels, 115
• JDBC Logging, 116
• ODBC Logging, 119
Logs Overview
A log file maintains a history of events. Dynamic Data Masking creates log files and contains system loggers
that record Dynamic Data Masking Server, service, and rule events. You can create custom loggers to log
information that you specify in a security rule. Set log levels to determine which loggers send log information.
The Management Console Tree contains logger and appender nodes that create log files. The system loggers
create the audit trail, rule, and server logs. You can add custom loggers to the Management Console tree that
you use in security rules to specify events that you want to log. You can add appender nodes under loggers to
specify how and where to log the event information. Set the Log Level property of the Dynamic Data Masking
Server to specify the severity level of the events that you want to log.
You can use the log files to identify a problem with the Dynamic Data Masking Server, and to monitor and troubleshoot problems with a Dynamic Data Masking service.
In addition to the system loggers, Dynamic Data Masking creates the following log files:
<year>_<month>.at
Detailed audit trail file. Contains detailed information about changes made within the Management
Console. Dynamic Data Masking uses the year and the month that it creates the file to name the file.
DDMError.txt
Logs standard error messages for the Dynamic Data Masking processes.
DDMOutput.txt
Logs standard output messages for the Dynamic Data Masking processes.
You can find log files in the following location: <Dynamic Data Masking installation>/log
Note: If you choose the Dynamic Data Masking Management Console installation without the Dynamic Data
Masking Server, the installation does not create a log directory.
The auditTrail.log file contains general audit information about changes in the configuration.
The detailed audit trail log file contains comprehensive audit information about modifications to the Dynamic
Data Masking configuration properties. Dynamic Data Masking names the detailed audit file according to the
year and month that it creates the file. For example, if Dynamic Data Masking creates a detailed audit file in
April 2019, it names the file 2019_04.at. You can use the detailed audit trail log files as input to the audit
command and generate audit trail reports. The audit trail report shows all changes made by users for
selected objects in the specified time frame.
The detailed audit file contains information about audit trail operations and their sources (the Management
Console or the Server Control command line program) for the following Dynamic Data Masking objects.
Database
The detailed audit file contains information about the following audit trail operations and their sources for
the Dynamic Data Masking database object:
Operation Management Console Server Control
Add Yes -
Remove Yes -
Copy Yes -
Move Yes -
Edit Yes -
Import No Yes
Export No Yes
Domain
The detailed audit file contains information about the following audit trail operations and their sources for
the Dynamic Data Masking domain object:
Operation Management Console Server Control
Add Yes -
Copy Yes -
Move Yes -
Edit Yes -
Remove Yes -
Service
The detailed audit file contains information about the following audit trail operations and their sources for
the Dynamic Data Masking service:
Operation Management Console Server Control
Add Yes -
Remove Yes -
Authorization
The detailed audit file contains information about the following audit trail operations and their sources for
Dynamic Data Masking authorization:
Operation Management Console Server Control
Edit Yes -
Server
The detailed audit file contains information about the following audit trail operations and their sources for the Dynamic Data Masking Server object:
Operation Management Console Server Control
Move Yes No
Remove - Yes
Backup - Yes
Restore - Yes
Lock - Yes
Support No Yes
License Yes -
Reload - Yes
Shutdown - Yes
Rename - Yes
Password - Yes
Network - Yes
Port - Yes
When Import in the command line creates or modifies an object, Dynamic Data Masking audits it as an import operation. When Import in the Management Console modifies an object, Dynamic Data Masking audits it as an edit operation.
Dynamic Data Masking audits Export in the command line as an export operation. Dynamic Data Masking
does not audit Export in the Management Console.
Dynamic Data Masking 9.9.1 audit trail reports do not show connection rule or security rule set changes.
For information about the audit command, see “Audit” on page 141.
The following table describes the standard audit trail report fields:
Field Description
Notes Contains data from the audit trail log reference field and any
warning messages.
The following table describes the compact audit trail report fields:
Field Description
Notes Contains data from the audit trail log reference field and any
warning messages.
Loggers
A logger is a Management Console tree node that uses Apache log4j to create a log of events.
You can add logger nodes in the Management Console tree under the Loggers node.
Use loggers to specify the events that you want to log and use appenders to define how to log the event. You
can use pre-defined system loggers to log Dynamic Data Masking Server, service, and rule events. You can
create custom loggers to log security rule events that you specify with the Log Message rule action.
Logger nodes can have multiple appender child nodes. When you use the logger in a security rule, the logger
logs the event in each format specified by the child appender nodes.
Dynamic Data Masking contains pre-defined system loggers and appenders that log Dynamic Data Masking
service events and rule events. You cannot edit or delete the system loggers.
The Loggers node is a child of the Dynamic Data Masking Server node in the Management Console tree.
Because it is a child of the Dynamic Data Masking Server node, only Dynamic Data Masking administrators
can edit and create child nodes of the Loggers node. Non-privileged users cannot edit or move logger and
appender nodes and an administrator cannot delegate permissions to a non-privileged user.
The following image shows the Loggers node and the child system logger nodes:
System Loggers
A system logger is a pre-defined logger node in the Management Console tree that logs Dynamic Data
Masking Server, service, and rule events.
The Management Console tree contains userReplacement, auditTrail, and rootLogger system loggers. The
system loggers use Rolling File appenders to create the audit trail, rule, and server logs. You cannot delete or
move the system loggers or the appenders. You can edit the Max File Size and Max Backups properties of the
system logger appenders, but you cannot edit the Type, Name, and File properties. If you edit a system logger
appender, Dynamic Data Masking immediately reconfigures config.properties and saves the file.
You can add appenders to the system loggers to log the same information in different formats.
The system loggers create the following log files:
auditTrail.log
Logs changes made within the Management Console. The AT appender of the auditTrail logger creates
the auditTrail.log file.
rule.log
Logs rules that the Rule Engine applies to incoming requests. In the Management Console, you can use
the Log When Rule is Applied box in the Edit Rule window to specify whether an occurrence of the rule is
logged. The UR appender of the userReplacement logger creates the rule.log file.
If multiple rule log files exist, Dynamic Data Masking appends each file name with a version number,
such as rule.log1. Dynamic Data Masking stores 10 rule log files by default. Rule logs update cyclically
and restart on rule.log1 when the logs are full.
By default, each rule log file stores up to 20 MB of data for a total of 200 MB. You can configure file size
and the maximum number of files in the UR appender.
You can use the Log Loader utility to load rule.log data into an Oracle, Db2, Informix, or Microsoft SQL
Server database. See Informatica Dynamic Data Masking Log Loader for information on the Log Loader
utility.
server.log
Logs server records, events, and error messages for internal troubleshooting of the Dynamic Data
Masking Server operations. The R appender of the rootLogger logger creates the server.log file.
If multiple server log files exist, Dynamic Data Masking appends each file name with a version number,
such as server.log1. Dynamic Data Masking stores up to 10 server log files at a time. Server logs
update cyclically and restart on server.log1 when the logs are full.
By default, each server log file stores up to 20 MB of data for a total of 200 MB. You can configure file size and the maximum number of files in the R appender.
Custom Loggers
A custom logger is a logger that you create to use in a security rule to define events that you want to log.
Loggers have a Name property that you define when you create the logger. Logger names must be unique.
When you create a security rule with the Log Message rule action, you specify the name of the logger in the
rule and define the log level in the rule. The logger creates logs based on the appender child nodes of the
logger. The log level that you define in the Send As parameter of the security rule must be equal to or higher
than the log level that you define in the Dynamic Data Masking Server node. If the log level of the rule is a
lower severity than the Dynamic Data Masking Server Log Level parameter, the logger will not log the event.
After you create a logger and add an appender, you must use the Log Message rule action in a security rule to
define the events that you want the logger to log. If you do not use the logger in a security rule, it does not log
events.
Because logger nodes are child nodes of the Dynamic Data Masking Server node, only a Dynamic Data
Masking administrator can create or edit a logger node. An administrator cannot delegate privileges to a non-
privileged user.
Note: Do not use system loggers with the Log Message rule action because you might not be able to perform
log analysis on the logs if they contain information from security rules.
1. In the Management Console tree, select the Loggers node and click Tree > Add Logger.
The Add Logger window appears.
2. Enter the name of the logger. The logger name can contain alphabetic characters, numbers, and
underscores (_), and must not exceed 60 characters.
3. Click OK.
The logger appears in the Management Console tree.
After you create a logger, you must add appenders to the logger to specify the log output format. You must
use the logger in a security rule with the Log Message rule action to define the events that you want to log.
Loggers Example
Your organization uses syslog to integrate log data from multiple types of systems. You want to add Dynamic
Data Masking logs to the syslog repository.
You add appenders to the userReplacement, auditTrail, and rootLogger system loggers. The appenders use
the syslog appender class. When Dynamic Data Masking writes to the auditTrail.log, rule.log, and server.log
files, it also creates a syslog output.
You create custom loggers and you add syslog appenders to the loggers. You use the loggers in security
rules based on the events that you want to log. Dynamic Data Masking sends the event data to the syslog
repository.
Appenders
An appender is a node in the Management Console tree that uses log4j classes to define the output format of
log information.
You can create appenders to log information in a format that is useful to your organization. You can use the
built-in Rolling File, Syslog, SMTP, and SNMP appenders, or create a custom appender to store log
information in any format. A logger can have multiple appenders.
Appenders are child nodes of logger nodes in the Management Console tree. Only administrators can create
and edit appender nodes. An administrator cannot delegate appender permissions to a non-privileged user.
The Dynamic Data Masking system loggers use Rolling File appenders to create the rule.log, auditTrail.log,
and server.log files. You cannot delete the system logger appenders. You can edit the system logger
appender Max File Size and Max Backups properties, but you cannot edit the Type, Name, and File properties.
You can add an appender to the system loggers to create an additional system log output.
The following image shows the system logger appenders in the Management Console tree and the UR
appender properties:
Rolling File appenders create plain text output files. You can use any plain text extension, such as .log or .txt.
The File property contains the file path and name of the log file. If you do not specify a complete file path, the
path originates in the Dynamic Data Masking installation directory. For example, the rule.log file has the
following File property:
log\rule.log
The Max File Size and Max Backups properties prevent the log files from becoming too large. When the log
reaches the Max File Size, the logger creates a new log file. When the number of files exceeds the Max
Backups number, the logger overwrites the first log file that it created. You can specify Max File Size and Max
Backups in megabytes or gigabytes by entering the number followed by MB or GB. Do not insert a space
between the value and the unit of measure.
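As a sketch of the disk budget these two properties create, using the rule log values mentioned earlier (20 MB per file, 10 files):

```shell
# Rolling appender disk budget: size per file times number of files.
max_file_size_mb=20
max_backups=10
echo "Maximum disk use: $((max_file_size_mb * max_backups)) MB"
```

With these values the appender never holds more than 200 MB of rule log data on disk.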
Property Description
Name The name of the appender. The appender name can contain alphabetic characters, numbers, and
underscores (_), and must not exceed 60 characters.
File The file path and name of the log file. If you do not specify a complete file path, the path originates in
the Dynamic Data Masking installation directory. The file path and name cannot exceed 60 characters.
Max File Size The maximum size that the output file reaches before the appender creates a new file.
Default is 10MB.
Note: Do not enter a space between the value and the unit of measure.
Max Backups The number of backup files the appender creates before it overwrites the oldest file. Max Backups
must be a positive integer.
Default is 20.
Syslog Appender
Create a Syslog appender to log information to a syslog repository.
If your organization uses a syslog system to store log information, you can use the Syslog appender to log
events. You can add Syslog appenders to the system loggers to log standard Dynamic Data Masking output
files to the syslog repository in addition to the Dynamic Data Masking log directory.
Property Description
Name The name of the appender. The appender name can contain alphabetic
characters, numbers, and underscores (_), and must not exceed 60
characters.
Facility The Syslog facility. A system administrator can set Facility to one of
the following strings: KERN, USER, MAIL, DAEMON, AUTH, SYSLOG,
LPR, NEWS, UUCP, CRON, AUTHPRIV, FTP, LOCAL0, LOCAL1, LOCAL2,
LOCAL3, LOCAL4, LOCAL5, LOCAL6, LOCAL7.
Default is USER.
Conversion Pattern The conversion pattern for the Syslog appender that you can modify.
By default, the pattern is:
%d{MM/dd HH\:mm\:ss,SSS} [%t] %-5p - %m%n
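As an illustration only, a Syslog appender with these properties corresponds to a log4j fragment like the following. Dynamic Data Masking maintains its own configuration, and the appender name and syslog host below are hypothetical; the Facility and ConversionPattern values match the defaults above:

```properties
# Hypothetical appender name and host; Facility and ConversionPattern
# reflect the default property values described above.
log4j.appender.ddmSyslog=org.apache.log4j.net.SyslogAppender
log4j.appender.ddmSyslog.SyslogHost=syslog.example.com
log4j.appender.ddmSyslog.Facility=USER
log4j.appender.ddmSyslog.layout=org.apache.log4j.PatternLayout
log4j.appender.ddmSyslog.layout.ConversionPattern=%d{MM/dd HH\:mm\:ss,SSS} [%t] %-5p - %m%n
```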
SMTP Appender
Create an SMTP appender to send log information as an email.
When you create an SMTP appender, the logger sends log events as an email. You can specify multiple email
addresses that you want to receive the email. For example, you can use the SMTP appender to send an email
when an error or fatal error occurs.
Before you use the SMTP appender, you must put the mail.jar file into the <Dynamic Data Masking
installation>/lib/ext directory. On Windows, you must remove and restart the Dynamic Data Masking
Server to update the Server configuration in the Windows Registry. On Unix, you must restart the Dynamic
Data Masking Server.
Property Description
Name The name of the appender. The appender name can contain alphabetic characters, numbers, and
underscores (_), and must not exceed 60 characters.
SMTP Host The SMTP relay mail server to use to send the email.
Debug Enable Debug to analyze the standard system output if an error occurs, such as an SMTP login
failure.
• If the Dynamic Data Masking Server runs on Windows, enter the following commands:
- server stop
- server remove
- server start
• If the Dynamic Data Masking Server runs on Unix, enter the following command:
- server restart
Note: If you add, remove, or update the files in the <Dynamic Data Masking installation>/lib/ext
directory, you must configure the SMTP appender again.
SNMP Appender
Create an SNMP appender to send log information by using SNMP protocol.
Simple Network Management Protocol (SNMP) is a standard protocol for managing devices on IP networks.
Devices that typically support SNMP are routers, switches, servers, workstations, printers, and modem racks.
When you create an SNMP appender, you define the Generic Trap Type as a numeric value. You can find
information on trap types at
http://publib.boulder.ibm.com/infocenter/zvm/v5r4/index.jsp?topic=/com.ibm.zvm.v54.kijl0/trp.htm.
Before you use the SNMP appender, you must put the SNMPTrapAppender and OpenNMS jar files into the
<Dynamic Data Masking installation>/lib/ext directory. On Windows, you must remove and restart the
Dynamic Data Masking Server to update the Server configuration in the Windows Registry. On Unix, you must
restart the Dynamic Data Masking Server.
Property Description
Name The name of the appender. The appender name can contain alphabetic characters,
numbers, and underscores (_), and must not exceed 60 characters.
Enterprise OID The object ID of the organization that sends the trap message. Set this parameter to any
value to identify the message.
Local Trap Send Port The port of the local SNMP embedded agent.
Generic Trap Type A number value that specifies the trap type. Set the Generic Trap Type to one of the
following values:
- 0. coldStart
- 1. warmStart
- 2. linkDown
- 3. linkUp
- 4. authenticationFailure
- 5. egpNeighborLoss
- 6. enterpriseSpecific
Forward Stack Trace with Trap Specifies whether to include the stack trace as part of the log message.
Application Trap OID The object ID of the application that sends the trap message. Enter the name of the
application object.
• SNMPTrapAppender_1_2_9.jar
1. Place the SNMPTrapAppender and the OpenNMS jar file into the following directory:
<Dynamic Data Masking installation>/lib/ext
2. Close the Management Console.
3. Open a Server Control window.
• If the Dynamic Data Masking Server runs on Windows, enter the following commands:
- server stop
- server remove
- server start
• If the Dynamic Data Masking Server runs on Unix, enter the following command:
- server restart
Note: If you add, remove, or update the files in the <Dynamic Data Masking installation>/lib/ext
directory, you must configure the SNMP appender again.
Creating an Appender
Create an appender to specify the format of the log information.
1. In the Management Console tree, select a logger node and click Tree > Add Appender. You can add an
appender to a system logger or to a custom logger.
The Add Appender window appears.
2. Select the type of appender that you want to create. The appender properties change based on the
appender that you choose.
3. Enter the appender properties and click OK.
The appender appears as a child node of the logger node that you selected.
Custom Appender
Create a custom appender to log events in a custom format by using log4j classes.
A custom appender can use any log4j appender class and you can specify multiple properties for the
appender.
The built-in appenders have hidden properties that you cannot modify. A custom appender is an appender
that either has a class that does not match a built-in appender class or a property that does not match the
hidden properties of the built-in class. For example, the Rolling File appender has a hidden encoding property
set to UTF-8. You can create a custom rolling file appender that has a different encoding property.
The following image shows a custom appender that uses the console appender class:
Custom Appender Properties
The following table describes the Custom appender properties:
Property Description
Name The name of the appender. The appender name can contain alphabetic characters, numbers, and
underscores (_), and must not exceed 60 characters.
Property Value The value of the property. Property Value can be any value that is represented as a string.
1. Create an appender and specify the type as Custom. Specify the RemoteHost, Port, and Appender Class.
2. Click OK.
Create a security rule using the Any Matcher and the Log Message Action. Specify the Logger Message using
the CEF format. For more information, see the Informatica Dynamic Data Masking 9.9.1 User Guide.
Log Levels
The log level that you define in the Dynamic Data Masking Server node determines the severity of the event
that you want to log. You specify a log level in a security rule to define the severity of individual events.
The Dynamic Data Masking Server node contains a Log Level property that the Dynamic Data Masking
administrator can set to Information, Warning, or Error. The log level in the Dynamic Data Masking Server
node corresponds to the Send As property of the Log Message security rule action. If the Send As property is
an equal or greater severity than the Log Level property, the logger logs the event. If the Send As property is a
lower severity than the Log Level property, the logger does not log the event.
For example, you have the Log Level property of the Dynamic Data Masking Server set to Warning. You have
three security rules that use the Log Message rule action. The Send As property of Rule_1 is set to
The following table describes the log levels:
Information Provides information about the normal behavior of the Dynamic Data Masking Server. Information logs
can include information about when a service starts or stops and when a user logs in or if the log in
fails.
Warning Provides information that you can use to find and analyze non-fatal abnormal states in the Dynamic
Data Masking Server. Warning logs can include information about a Dynamic Data Masking service start
or stop failure, or an error that occurs when you add a node in the Management Console tree.
Error Provides only error messages. Use the Error log level in production because it provides the best
Dynamic Data Masking performance.
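The comparison between the Send As property and the server Log Level can be sketched as follows, assuming the severity order Information < Warning < Error:

```shell
# Maps each level to a rank; an event is logged only when the rule's
# Send As rank is at least the server's Log Level rank.
rank() {
  case "$1" in Information) echo 1 ;; Warning) echo 2 ;; Error) echo 3 ;; esac
}
server_level=Warning
for send_as in Information Warning Error; do
  if [ "$(rank "$send_as")" -ge "$(rank "$server_level")" ]; then
    echo "$send_as: logged"
  else
    echo "$send_as: not logged"
  fi
done
```

With the server Log Level set to Warning, only the Information event is dropped.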
1. In the Management Console tree, select the Dynamic Data Masking Server node and click Tree > Edit.
The Edit window appears.
2. Configure the Log Level property to the level that you want to log.
3. Click OK.
JDBC Logging
If you use the DDM for JDBC service, you can configure logging on the client machine to debug the JDBC
wrapper.
The Dynamic Data Masking installation contains a template configuration file for JDBC logging. You can find
the template in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\template.jdbcLogConfig.properties
The template.jdbcLogConfig.properties file contains the following properties:
ddm.logfile.name
ddm.logging.level
• ERROR. Provides the complete details of all errors and exceptions that affect the Dynamic Data
Masking processes.
ddm.logfile.limit
Specifies the maximum size of the log file before the logger creates a new file. For example, you might
enter 500KB or 200MB. A value of zero (0) indicates that there is no file size limit. Modifiable.
Default value: 0
ddm.logfile.count
The number of backup files the appender creates before it overwrites the oldest file. Modifiable.
Default value: 1
ddm.logfile.append
Specifies whether you want to overwrite the log file when you start the application. A value of false
indicates that you want to overwrite the log file with a new file. A value of true indicates that you want to
add new log information to the existing log file.
ddm.logfile.encoding
ddm.log.format
The following properties are used in the configuration of the log file name and are replaced at runtime:
• %t. The system temporary directory. For example, %t/DDM_GENJDBC.log causes the logs on Solaris to be
written to /var/tmp/DDM_GENJDBC.log and the logs on Windows 7 to be written to C:\Users\<username>
\AppData\Local\Temp.
• %h. The value of the user.home system property.
• %g. The generation number to distinguish rotated logs. If the file count is greater than one, the generation
number is added to the end of the file name.
• %u. A unique number used to resolve conflicts. If FileHandler tries to open the file when the file is in use
by another process, it increments the unique number and tries again until it finds a file name that is not in
use.
• %%. Translates to a percent sign (%).
Note: When you modify the JdbcLogConfig.properties file, you must restart the application for the new
properties to take effect.
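A minimal JdbcLogConfig.properties sketch built from the properties above; the values are examples chosen for illustration, not defaults:

```properties
# Example values only; see the property descriptions above.
ddm.logfile.name=%t/DDM_GENJDBC-%u.log
ddm.logging.level=ERROR
ddm.logfile.limit=20000000
ddm.logfile.count=5
ddm.logfile.append=true
```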
1. Copy the template.jdbcLogConfig.properties file. You can find the file in the following location:
<Dynamic Data Masking installation>\Wrappers\jdbc\template.jdbcLogConfig.properties
2. Save the file to a directory that is accessible to the JVM.
3. Rename the file that you saved in the previous step to the following name:
JdbcLogConfig.properties
4. Create a VM argument and set the value of the argument to the file path and file name of the
JdbcLogConfig.properties file you saved in the previous step.
Create the following VM argument:
-DDDM_GENJDBC_LOG=<config file name and location>
For example, you might create a VM argument with the following value:
-DDDM_GENJDBC_LOG=C:\DDM_GenJDBC\JdbcLogConfig.properties
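On a Unix-style shell, the resulting launch command might look like the following sketch; the configuration path and application JAR name are placeholders:

```shell
# Placeholder path and application JAR; echo shows the assembled command.
DDM_LOG_ARG="-DDDM_GENJDBC_LOG=/opt/ddm/JdbcLogConfig.properties"
echo java "$DDM_LOG_ARG" -jar client-app.jar
```

Replacing echo with the real java invocation launches the client application with JDBC logging enabled.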
ODBC Logging
Dynamic Data Masking ODBC logging uses Apache log4cxx. The log4cxx library is consumed as a static library and is packed inside the Driver Manager proxy DLL to avoid conflicts with the client application libraries.
To enable logging, set up a user-level environment variable that points to a log4cxx configuration file.
The Dynamic Data Masking installation contains a template configuration file for ODBC logging. You can find
the template in the following location:
<Dynamic Data Masking installation>\Wrappers\odbc\template.odbcLogConfig.properties
The following text is an example of an ODBC configuration file:
########## Begin of non-modifiable properties section ##########
log4j.appender.ddmGenODBCFileAppender=org.apache.log4j.RollingFileAppender
log4j.appender.ddmGenODBCFileAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.ddmGenODBCFileAppender.layout.ConversionPattern=%d{MM/dd HH\:mm\:ss,SSS} %-10X{processID} %-10t %X{app} (%F:%L) %-5p - %m%n
#only permitted to change the log level from INFO to either OFF or DEBUG, do not modify/remove the ddmGenODBCFileAppender name.
log4j.rootLogger=INFO, ddmGenODBCFileAppender
#point the File property to the complete file path including the folder location and prefixing the name provided to the log file name and adding the process id as a suffix to the file name in order to support different instances of the same application running at same time. Hence, do not remove the ${processID} suffix.
log4j.appender.ddmGenODBCFileAppender.File=C:\\logs\\Aquadata-${processID}.log
log4j.appender.ddmGenODBCFileAppender
Specifies that the logger uses a rolling file appender. Not modifiable.
log4j.appender.ddmGenODBCFileAppender.layout
Specifies that the rolling file appender uses a pattern layout. Not modifiable.
log4j.appender.ddmGenODBCFileAppender.layout.ConversionPattern
Specifies the logging format for the file appender. Not modifiable.
log4j.rootLogger
Specifies the log level and the appender for the root logger. Do not change the file appender name.
• INFO. Provides information about the normal behavior of the Dynamic Data Masking Server.
Information logs can include information about when a service starts or stops and when a user logs
in or if the login fails.
• DEBUG. Provides information for debugging issues, such as client connection details to the database
and the Dynamic Data Masking Server, connection rule results, and original and modified requests to
the database.
• OFF. Turns off logging.
log4j.appender.ddmGenODBCFileAppender.File
Specifies the file name and path of the log file. Modifiable.
Note: When you enter the path, you must include the -${processID}.log text after the file name. In the
example above, the file name is Aquadata.
log4j.appender.ddmGenODBCFileAppender.MaxFileSize
Specifies the maximum size of the log file before the logger creates a new file. Modifiable.
Default value: <Max log file size ex: 500KB, 20MB etc>
log4j.appender.ddmGenODBCFileAppender.MaxBackupIndex
The number of backup files the appender creates before it overwrites the oldest file.
Example value: 20
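Putting the modifiable properties together, the user-editable portion of a configuration file might look like the following sketch. The log file path, size, and backup count are example values:

```
log4j.rootLogger=DEBUG, ddmGenODBCFileAppender
log4j.appender.ddmGenODBCFileAppender.File=C:\\logs\\Aquadata-${processID}.log
log4j.appender.ddmGenODBCFileAppender.MaxFileSize=20MB
log4j.appender.ddmGenODBCFileAppender.MaxBackupIndex=20
```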
If you launch the application through a .bat file, add the following line to the .bat file:
SET DDM_GENODBC_LOG=<log configuration file path>
For example, you might add the following text:
SET DDM_GENODBC_LOG=C:\logs\logConfig.properties
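A minimal launcher .bat file might look like the following sketch. The application path is hypothetical:

```
@echo off
REM Point Generic ODBC logging at the application-specific config file
SET DDM_GENODBC_LOG=C:\logs\logConfig.properties
REM Launch the client application (path is an example)
START "" "C:\Program Files\ClientApp\ClientApp.exe"
```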
1. Define the DDM_GENODBC_LOG user-level environment variable. Set the value to the application-specific
configuration file that contains the log file path and log configuration for the application. The
environment variable has the following properties:
Variable name
DDM_GENODBC_LOG
Variable value
The file path of the logging configuration file for the application. For example, you might enter the
following text:
C:\DDMLoggingConfigs\CognosLog4cxxConfig.properties
2. Start the application and test the ODBC connectivity. Verify that odbc32.dll is used and the configuration
file is read.
3. Remove the DDM_GENODBC_LOG environment variable or delete the variable value to ensure that the
variable does not affect applications that you start later.
High Availability
Informatica recommends that you use standard vendor software to implement high availability. However, if
you do not have high availability configured for your database, you can configure Dynamic Data Masking high
availability. To send requests to a secondary database if the primary database is unavailable, configure high
availability for the database. To send requests through a secondary Dynamic Data Masking Server if the
primary Dynamic Data Masking Server is unavailable, configure high availability for the Dynamic Data
Masking Server.
Create connection rules that use the Check Database Connection matcher to send requests to a primary
database if a connection exists, and to a secondary database if the primary database connection does not
exist. When you create the connection rule, you specify a database in the rule matcher and Dynamic Data
Masking verifies whether a connection to the database exists.
For more information about the Check Database Connection Matcher, see the Dynamic Data Masking User
Guide.
Configuring Database High Availability
Configure database high availability to send requests to a secondary database if the primary database is
unavailable.
1. In the Management Console, create a database node with a connection to the primary database and a
database node with a connection to the secondary database.
2. Add the Dynamic Data Masking service for the database type.
3. Open the connection rule set for the Dynamic Data Masking service and create a connection rule that
sends requests to the primary database if a connection to the database exists. Configure the following
properties for the connection rule:
Matcher
Select the Check Database Connection matcher. The Check Database Connection matcher verifies
whether a connection to the database exists.
Database
Enter the name of the database node for the primary database.
Rule Action
Select the Switch to Database action. The Switch to Database action sends the request to the
database if the matcher identified a connection.
Database
Enter the name of the database node for the primary database.
Processing Action
Select Stop if Applied. The Rule Engine does not continue to the next rule in the tree if Dynamic Data
Masking applied the Switch to Database rule action.
4. In the connection rule set, create a rule that sends the request to the secondary database if a connection
to the primary database does not exist. Configure the following properties for the connection rule:
Matcher
Database
Enter the name of the database node for the secondary database.
Rule Action
Database
Enter the name of the database node for the secondary database.
Processing Action
Informatica recommends that you use standard vendor solutions to provide Dynamic Data Masking high
availability. For example, you might use failover clustering for Microsoft SQL Server.
To configure Dynamic Data Masking Server high availability for Db2, you can create connection rules that use
the Load Control rule action. You must have at least two Dynamic Data Masking Server installations and the
database client must use the IBM JDBC driver for Db2. If the Dynamic Data Masking Server installations are
on the same machine, they must use different listener ports. After you configure Dynamic Data Masking
Server high availability, you can connect to the database through either of the Dynamic Data Masking Servers.
If one of the servers is unavailable, the request goes through the other server.
Verify the following prerequisites before you configure Dynamic Data Masking Server high availability for
Db2:
1. In the Management Console for the primary Dynamic Data Masking Server, create a connection to the
Db2 database. The database client must use the IBM JDBC driver to connect to the listener port that you
define for the connection.
Note: If the Dynamic Data Masking Server installations are on the same machine, you must configure
different listener port numbers for the primary and secondary servers.
2. Add the Dynamic Data Masking service for Db2.
3. Open the connection rule set for the Dynamic Data Masking service and create a rule folder that
identifies requests to the database. Configure the following properties for the rule folder:
Matcher
Select the Incoming DDM Listener Port matcher. The Incoming DDM Listener Port matcher identifies
requests based on the incoming listener port.
Incoming Port
Enter the listener port number for the primary Dynamic Data Masking Server.
Rule Action
Select the Folder rule action. The Folder rule action creates a rule folder.
Processing Action
Select the Stop if Matched processing action to process only the connection rules in the rule folder.
4. In the rule folder, create a connection rule that sets the priority levels of the Dynamic Data Masking
Servers. Configure the following properties for the connection rule:
Select the All Incoming Connections matcher. The All Incoming Connections matcher applies the
rule action to all SQL requests.
Rule Action
Select the Load Control rule action. The Load Control rule action identifies the Dynamic Data
Masking Servers and port numbers, and sets the server priority level. Configure the following Load
Control properties:
• Host. Enter the names of the Dynamic Data Masking Servers. Click the plus sign (+) to add
additional servers.
• Port. Enter the port number for each of the Dynamic Data Masking Servers.
• Priority. Enter a priority number for each of the Dynamic Data Masking Servers. The value of the
Priority property corresponds to the frequency that the client sends the request through the
Dynamic Data Masking Server.
Note: For more information about the Load Control action and how to set priority levels, see the
Dynamic Data Masking User Guide.
Processing Action
Select the Continue processing action. The Continue processing action sends the request to the
next rule in the tree.
5. In the connection rule folder, create another connection rule that sends the request to the database.
Configure the following properties for the connection rule:
Rule Action
Select the Switch to Database action. The Switch to Database action sends the request to the
database that you specify.
Database
Enter the name of the database node in the Management Console tree.
Processing Action
Incoming Port
Enter the listener port number for the secondary Dynamic Data Masking Server.
Rule Action
Processing Action
Select the Stop if Matched processing action to process only the connection rules in the rule folder.
11. Configure the following properties for the first rule in the rule folder:
Matcher
Rule Action
Select the Load Control rule action. Configure the following Load Control properties:
• Host. Enter the names of the Dynamic Data Masking Servers. Click the plus sign (+) to add
additional servers.
• Port. Enter the port number for each of the Dynamic Data Masking Servers.
• Priority. Enter a priority number for each of the Dynamic Data Masking Servers. Enter one (1) for
the secondary Dynamic Data Masking Server and zero (0) for the primary Dynamic Data Masking
Server.
12. Configure the following properties for the second rule in the rule folder:
Matcher
Rule Action
Database
Enter the name of the database node in the Management Console tree.
Processing Action
To verify which Dynamic Data Masking Server receives requests, you can run Dynamic Data Masking in debug
mode and check the log files to see which server provides debug information. You can also define database
nodes for different databases on each server and check which database Dynamic Data Masking sends the
request to.
Internally, when the Teradata JDBC Driver runs COP discovery, it appends "cop1" to the database hostname,
then proceeds to cop2, cop3, cop4, and so on. The driver continues running DNS lookups sequentially until it
encounters an unknown COP hostname. Alternatively, if you have enabled the COPLAST connection
parameter, the driver runs DNS lookups until it finds a COP hostname whose IP address matches the coplast
hostname. For more information on the COPLAST connection parameter and the coplast hostname, see the
Teradata connectivity documentation.
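As a rough sketch of the naming convention described above, assuming an example base hostname of tddb, the driver would generate and look up the following sequence of names:

```shell
# Sketch of the sequential COP naming convention.
# The base hostname "tddb" is an example; the driver appends cop1, cop2, ...
# and performs a DNS lookup for each generated name in turn.
host="tddb"
for i in 1 2 3 4; do
  printf '%scop%d\n' "$host" "$i"
done
```

The loop prints tddbcop1 through tddbcop4; in practice the driver stops at the first name that does not resolve in DNS.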
When a client connects to a Teradata database through a JDBC URL, the client provides the hostname
without the COP suffix and the driver selects a COP entry to attempt to connect. If the first COP entry fails,
the driver tries another.
To perform COP discovery and connect to a Teradata database node, Dynamic Data Masking completes the
following process:
1. Dynamic Data Masking performs COP discovery once for all Teradata databases, when the Dynamic Data
Masking Server starts. This is in contrast to the Teradata driver, which performs COP discovery each
time a client connects to the database. If you add or change a Teradata database, Dynamic Data
Masking performs COP discovery again for that specific database. Dynamic Data
Masking uses the same naming convention as the Teradata driver to discover COP hosts.
2. Dynamic Data Masking discovers COP host names, unlike the Teradata driver, which discovers COP IP
addresses. During COP discovery Dynamic Data Masking generates a list of available COP hosts, which
could be empty. Dynamic Data Masking looks up COP hosts in the network DNS.
3. Internally, Dynamic Data Masking uses one high availability thread to periodically check connections to
all Teradata COP hosts and mark them as available or not available. The default time to periodically
check connections to COP hosts is three minutes. You can specify this amount of time in the
jvm.params file with the verify.connection.time parameter, which is configured in milliseconds.
4. When a client connects to a Teradata database through Dynamic Data Masking, Dynamic Data Masking
randomly connects to a COP discovered host that the high availability thread marked as available. Before
Dynamic Data Masking makes the connection, it checks that the COP host is still available by creating a
socket to the COP host and port. The process of creating a socket to the host and port takes a few milliseconds if the COP host is up and running. The process takes no longer than twice the amount of time specified in the verify.connection.time parameter for the high availability thread.
Dynamic Data Masking stores the names of COP hosts and not the IP addresses. If an IP address changes in
the DNS table, Dynamic Data Masking resolves a known COP host to a new IP address. However, if the COP
host is down, Dynamic Data Masking is unable to resolve the COP host to an IP address in the DNS. If no
other COP hosts are available, the connection might fail.
You cannot disable COP discovery in Dynamic Data Masking. If COP discovery does not return any COP
hosts, Dynamic Data Masking connects directly to the host specified in the Teradata database form.
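For example, to check COP host connections every five minutes instead of the default three, you might add the following entry to the jvm.params file. The value is in milliseconds:

```
verify.connection.time=300000
```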
Implementation Scenarios
Dynamic Data Masking as a Single Point of Failure
The following diagram shows an example implementation where the Teradata instance details are configured
on the same machine where Dynamic Data Masking is installed. In this scenario, the client sends a request to
Dynamic Data Masking and Dynamic Data Masking creates the InetSocketAddress object.
10.65.42.233 INSUTD14ILM261cop2
and
10.65.42.233 INSUTD14ILM261cop2.informatica.com
To achieve high availability for the Dynamic Data Masking servers and utilities in a distributed environment,
configure Dynamic Data Masking as a node in ZooKeeper. When a server or service is down, ZooKeeper
continues to provide uninterrupted service by switching queries to a server or service that is running.
1. Use the Dynamic Data Masking Server control commands to create the parent node in ZooKeeper.
a. Start the Dynamic Data Masking Server.
b. Connect to ZooKeeper with the following command:
server zookeeper connect -url '<zookeeper_ensemble>'
Hive Connectivity with Apache ZooKeeper and Dynamic Data Masking 131
Example:
server zookeeper connect -url 'server1.informatica.com:2181,
server2.informatica.com:2181, server3.informatica.com:2181'
c. Create a parent node in ZooKeeper with the following command:
server zookeeper create -path /<name of parent node>
Example:
server zookeeper create -path /DDM
The following image shows a database cluster with two database nodes and a Dynamic Data
Masking cluster with two server nodes:
2. Create an ephemeral node for each Dynamic Data Masking Server in the cluster. Initialize the ephemeral
node with the URL copied from the database node of hiveserver2. Replace the host and port values of the
database URL with the corresponding host and port values of the Dynamic Data Masking Server.
If the environment is enabled for Kerberos, replace the Hive service principal with the Dynamic Data Masking service principal defined in the /cfg/ddm.security file.
server zookeeper create -path /<DDM cluster path> -type EPHEMERAL -data
<driver_url_toddm_cluster>
Example:
server zookeeper create -type EPHEMERAL -path /DDM/ddm1 -data "hive.server2.instance.uri=<ddmhost1>:<DDM hive service port>;hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=<ddmhost1>;hive.server2.thrift.port=<DDM hive service port>;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=<hive service principal for ddmhost1>"
hiveDatabase;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;auth=kerberos;
kerberosAuthType=fromSubject
Note: If you do not provide the databaseName, Dynamic Data Masking connects to the default Hive
database.
For Kerberos-enabled Hive databases, replace the _HOST value of the property
hive.server2.authentication.kerberos.principal=hive/_HOST@KERB with the Hive server hostname
in the Hive configuration.
After you replace the _HOST value, the property value appears as:
hive.server2.authentication.kerberos.principal=hive/your.host.com@KERB
7. In the Database Server Principal field, specify the server principal of Hive.
Example: hive/hiveserver@REALM.COM
Do not enter a value in the DBA Username and DBA Password fields.
8. Click Test Connection.
A JDBC connection to the Dynamic Data Masking service opens. The service uses the defined database connection parameters. A confirmation message appears if the connection is valid.
REM *********************************************
REM Connect DDM Server to Zookeeper
REM *********************************************
CALL server zookeeper connect -url '<zookeeper_ensemble>'
REM *********************************************
REM Create a new ephemeral node for the DDM Server
REM *********************************************
CALL server zookeeper create -type EPHEMERAL -path /<DDM cluster>/<name of ephemeral node> -data "hive.server2.instance.uri=<ddmhost1>:<DDM Hive service port>;hive.server2.authentication=KERBEROS;hive.server2.transport.mode=binary;hive.server2.thrift.sasl.qop=auth;hive.server2.thrift.bind.host=<ddmhost1>;hive.server2.thrift.port=<DDM Hive service port>;hive.server2.use.SSL=false;hive.server2.authentication.kerberos.principal=<hive service principal for ddmhost1>"
5. Save the serverStarting script file in the Dynamic Data Masking installation directory.
When the Dynamic Data Masking Server starts, the server runs the serverStarting script file.
6. Repeat steps 3-5 for other Dynamic Data Masking servers in the cluster.
After you connect to the ZooKeeper server, you can manage Dynamic Data Masking servers and services on
ZooKeeper with the following commands:
• create
• delete
• disconnect
• get
• list
• update
Some commands must include parameters such as data, the path to the node, the node type, and the URL to
ZooKeeper. Use the following table to determine which parameters (-data, -path, -type, -url) are required and
optional for each command:
Command   -data      -path      -type   -url   Description
delete    -          Required   -       -      Deletes the parent node and children nodes recursively.
update    Optional   Required   -       -      Updates the existing node in the specified path.
The following process describes how the Dynamic Data Masking Server attempts to restore the connection
and re-create the ephemeral node:
1. During the connection, Dynamic Data Masking provides the session timeout value to ZooKeeper.
ZooKeeper manages session expiration.
2. If the connection is lost, Dynamic Data Masking attempts to restore the broken connection within the
specified timeout period in order to retain the ephemeral nodes.
3. When ZooKeeper does not receive a response from Dynamic Data Masking within the specified session
timeout period, it expires the Dynamic Data Masking session. When the session expires, ZooKeeper
removes all ephemeral nodes owned by the session and immediately notifies all connected clients,
which are watching the nodes, about the change.
4. The Dynamic Data Masking instance associated with the expired session remains disconnected from
ZooKeeper, and is not notified of the session expiration until it reconnects with ZooKeeper. When the
connection with ZooKeeper is reestablished, Dynamic Data Masking receives a Session Expired
notification. At this point, Dynamic Data Masking attempts to re-create all ephemeral nodes that were
created during the expired session.
By default, Dynamic Data Masking provides a session timeout value of 10,000 milliseconds (10 seconds) to
ZooKeeper. To modify the default value, set the zookeeper.timeout system property in the jvm.params file.
For example, to set a session timeout value of 12 seconds specify:
zookeeper.timeout=12000
For the minimum and maximum session timeout values that ZooKeeper allows the client to negotiate, see the
ZooKeeper documentation. For more information, contact your ZooKeeper administrator.
Dynamic Data Masking monitors the ephemeral nodes that it creates so it can restore the nodes and node
content as follows:
• When reestablishing the connection with ZooKeeper, Dynamic Data Masking restores the ephemeral nodes
and the content.
• If a third-party tool is used to remove the ephemeral nodes, or remove the parent cluster node and then
create it again, Dynamic Data Masking restores the ephemeral nodes.
• If a third-party tool is used to modify the ephemeral nodes, Dynamic Data Masking restores the node
content.
Chapter 11
Server Control
Server Control has a set of commands that simplify management and configuration of local and remote
Dynamic Data Masking Servers. Server Control is installed with the Dynamic Data Masking Server. Run Server
Control on the machine where you installed the Dynamic Data Masking Server.
You can run the following types of commands from Server Control:
server
Use server commands to configure the local Dynamic Data Masking Server and Dynamic Data Masking
services. For example, you can start and stop the Dynamic Data Masking Server and services, set the
Dynamic Data Masking listener port, and view the Dynamic Data Masking Server version.
You can run the equivalents of the following Server Control server commands from shell scripts:
• start
• stop
• restart
• startDDMService
• stopDDMService
• restartDDMService
• status
server config
Use server config commands to perform configuration tasks on local and remote Dynamic Data
Masking Servers. For example, you can set Dynamic Data Masking database passwords, synchronize
Dynamic Data Masking Server configurations, and export and import security rule sets and Dynamic Data
Masking databases.
server service
Use server service commands to manage Dynamic Data Masking services. For example, you can
import and export Dynamic Data Masking services.
Important: Server Control stores information in the config.properties file. You must not modify the
config.properties file.
If you run the Dynamic Data Masking Server on Windows, Server Control runs as a batch file with a .bat
extension.
If you run the Dynamic Data Masking Server on Linux, Server Control runs as a shell script with no extension.
You can run the server shell script to use the Server Control commands. Alternatively, you can run a subset
of the Server Control commands from individual shell scripts. For example, to start the Dynamic Data
Masking Server, you can run the server shell script with the start command or you can run the start shell
script directly.
1. Select Start > Informatica > Dynamic Data Masking > Server Control.
2. At the command line, enter commands with the following syntax:
server <command name> <parameter>
For example, the following command sets the port for the Dynamic Data Masking Server to 8195:
server setPort 8195
1. Open a terminal, and navigate to the Dynamic Data Masking Server installation directory.
For example, you might navigate to the following directory:
/home/Informatica/DDM
2. Run the shell script for the Server Control command that you want to use.
• If you run the server shell script, enter commands with the following syntax:
./server <command name> <parameter>
• If you run a shell script for a specific command, run the shell script with the following syntax:
./<shell script name> <parameter>
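For example, both of the following invocations start the Dynamic Data Masking Server from the installation directory, using the syntax described above. The installation path is the example path used earlier:

```
cd /home/Informatica/DDM
./server start
./start
```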
Use the following rules when you enter commands and parameters:
Syntax Notation
Before you use Server Control, review the syntax notation.
Convention Description
Server Commands
Use Server Control server commands to configure the local Dynamic Data Masking Server and services.
• audit
• checkPort
• displayAddress
• encrypt
Audit
Generates audit trail reports to help you analyze specific data. The audit trail report shows all changes made
by users for selected objects in the specified time frame. The command also verifies that audit trail entries
were not altered. You can write the report to the console or to a CSV file.
Note: Use the audit command only with files generated by Dynamic Data Masking 9.9.1 or later.
Example 1:
server audit -report standard -file 2019_03.at -out D:\Audit\AuditReport.csv
Example 2:
server audit -report compact -start 2019_03 -end 2019_05 -out console
The audit command uses the following parameters:
report
Generate an audit trail report. Specify a standard or a compact report type. Default is compact.
• standard. Contains the Date, Address, User name, Type, Path, Object name, Operation, Result,
Attribute, Old value, New value, and Notes.
• compact. Contains the Date, Address, User name, Type, Path, Object name, Operation, Result, and
Notes.
file
Read the specified audit log file or files. Insert a space to separate file names.
start
Date when you want Dynamic Data Masking to start reading audit log files. Dynamic Data Masking reads
all audit log files between the start and end dates you specify. If you do not specify a start date, Dynamic
Data Masking uses the audit log file for the current date.
end
Date when you want Dynamic Data Masking to stop reading audit log files. Dynamic Data Masking reads
all audit log files between the start and end dates you specify. If you do not specify an end date, Dynamic
Data Masking uses the audit log file for the current date.
out
Specify how to write the audit report. To write the report to the console, specify console. To write the
report to a file, specify a path to a CSV file. Default is console.
CheckPort
Checks if a port is available or locked. Use the command to identify ports that you can use for the Dynamic
Data Masking Server.
The checkPort command uses the port parameter. The value of the port parameter is the port number that
you want to check for the Dynamic Data Masking Server. The command uses the following syntax:
server checkPort <port>
For example, you might enter the following command:
server checkPort 6002
DisplayAddress
The displayAddress command returns a message indicating whether the Dynamic Data Masking Server
network address has been set with the setAddress command. If the address has been set, it displays the
address.
Encrypt
Encrypts a file. Informatica Global Customer Support can decrypt files that you encrypt with the encrypt
command.
You can send an encrypted screenshot or log file that contains sensitive information to Informatica Global
Customer Support. For example, if you saved an unencrypted log archive with the support command, you can
use the encrypt command to encrypt the file before you send it to Informatica Global Customer Support.
Notify Informatica Global Customer Support that you encrypted the file with the Server Control encrypt
command.
/y
Forces the command to continue without user confirmation. If you use the /y parameter, Server Control
does not require confirmation to overwrite an existing file.
filename
The name of the file that you want to encrypt.
filename-encrypted
The filename of the encrypted file. If you have a case open with Informatica Global Customer Support,
Informatica recommends that you include the case number in the name of the file.
Help
Displays descriptions and parameters for each Server Control command.
Log
Sets and displays the Dynamic Data Masking Server log level.
If you do not include a Log_Level parameter in the command, Server Control displays the current log level.
You can set the following log levels:
• INFO
• DEBUG
• WARN
• ERROR
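For example, assuming the Log_Level parameter follows the command name as with the other server commands, the following commands set the log level and then display the current level:

```
server log DEBUG
server log
```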
Remove
Removes the service or daemon for the Dynamic Data Masking Server. The Dynamic Data Masking Server
must be stopped to run the command.
Rename
Renames the Dynamic Data Masking service on Windows or the daemon on Linux. The Dynamic Data
Masking Server must be shut down to run the command.
The rename command uses the name parameter. The value of the name parameter is the name that you want
to set for the Dynamic Data Masking Server. The command uses the following syntax:
server rename <name>
For example, you might enter the following command:
server rename Informatica_DDM
On Linux and UNIX, Dynamic Data Masking does not check to verify that the Dynamic Data Masking Server
name you choose is not in use. Before you change the Server name, make sure that there is not a Dynamic
Data Masking Server in the same environment with the same name.
Restart
Restarts the Dynamic Data Masking Server.
RestartDDMService
Restarts a Dynamic Data Masking service configured in the Management Console. The Dynamic Data
Masking Server must be running.
The restartDDMService command uses the DDMService_Name parameter. The value of the DDMService_Name
parameter is the name of the Dynamic Data Masking service that you want to restart.
SetAddress
Sets the network address of the Dynamic Data Masking Server to enable IP address binding. The Dynamic
Data Masking Server must be shut down to run the command.
Use the setAddress command if you want multiple Dynamic Data Masking Servers to run on the same
machine and use the same listener port or if you want to use the same listener port for multiple Dynamic
Data Masking services when the same service is defined on different Dynamic Data Masking Servers. The
command creates a server.address=<IP address> property in the config.properties file and allows
Dynamic Data Masking to identify the server or service by <IP address>:<port> or <hostname>/<IP
address>:<port>.
You can use the displayAddress command to view the current network address and the removeAddress command to remove the address.
SetInternalPassword
Sets a value for the Dynamic Data Masking Server internal password. The Dynamic Data Masking Server must
be shut down to execute the command.
The setInternalPassword command uses the password parameter. The value for the password parameter is
the internal password that you want to set for the Dynamic Data Masking Server.
SetPort
Sets the value of the server management port. The Dynamic Data Masking Server must be shut down to run
the command.
The setPort command uses the port parameter. The value of the port parameter is the port number that you
want to set for the Dynamic Data Masking Server.
Start
Starts the Dynamic Data Masking Server. Creates an operating system service in Windows if the service does
not exist. The Dynamic Data Masking Server must be shut down to run the command.
StartDDMService
Starts a Dynamic Data Masking service configured on the Dynamic Data Masking Server. The Dynamic Data
Masking Server must be running.
The startDDMService command uses the DDMService_Name parameter. The value of the DDMService_Name
parameter is the name of the Dynamic Data Masking service that you want to start.
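For example, to start a hypothetical service named DDM for Oracle; the quotation marks are an assumption for a service name that contains spaces:

```
server startDDMService "DDM for Oracle"
```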
Status
Shows whether the Dynamic Data Masking Server is running. The command also displays the name of the
Dynamic Data Masking Server and the number of the server management port.
Stop
Stops the Dynamic Data Masking Server.
StopDDMService
Stops a Dynamic Data Masking service configured on the Dynamic Data Masking Server.
The stopDDMService command uses the DDMService_Name parameter. The value of the DDMService_Name
parameter is the name of the Dynamic Data Masking service that you want to stop.
Support
Creates a .zip archive of Dynamic Data Masking logs. Send the encrypted log archive to Informatica Global
Customer Support to troubleshoot issues with Dynamic Data Masking.
• All files in the <Dynamic Data Masking installation>/cfg directory with the exception of the following
files:
- config.cfg. The archive contains config.cfg.xml, which is config.cfg in XML format. The XML file
does not contain user passwords.
- config.pbk
• All files in the <Dynamic Data Masking installation>/lib directory.
• All files in the <Dynamic Data Masking installation>/log directory.
• jvm.params
The following files are generated at runtime and included in the archive:
• DDM for <database>.txt. Contains statistics for running the Dynamic Data Masking services.
• environment.txt. Contains operating system environment parameters.
• java_properties.txt. Contains Java system properties.
Note: The support command does not save user passwords. Dynamic Data Masking replaces encrypted user
passwords with new encrypted passwords.
/y
Forces the command to continue without user confirmation. If you use the /y parameter, Server Control
does not require confirmation to overwrite an existing file.
filename
The name of the log archive file. If you have a case open with Informatica Global Customer Support,
Informatica recommends that you include the case number in the name of the file.
-source
Indicates that the following argument is a source. You can indicate a source to create a log archive of a
remote Dynamic Data Masking Server.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
password
The password for the Dynamic Data Masking user name.
host
The host name or IP address of the machine that hosts the source Dynamic Data Masking Server.
port
The port number of the source Dynamic Data Masking Server.
Note: You can encrypt an unencrypted log archive with the encrypt command.
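For illustration, a support invocation that combines the parameters above might look like the following sketch. The `server` script name, the case-number file name, and the connection values are placeholder assumptions:

```
server support /y CASE123456_logs.zip
server support CASE123456_logs.zip -source admin <password> <host> <port>
```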
Version
Displays the name and version of the Dynamic Data Masking Server. The Dynamic Data Masking Server must
be running.
Server Config Commands
Server Control includes the following server config commands:
• export
• import
• setDBPassword
• sync
• setKeyStore
Export
Use the export command to export a Dynamic Data Masking database or security rule set from the Dynamic Data Masking Server. You can then import the file into one or more Dynamic Data Masking Servers.
If you do not specify a source Dynamic Data Masking Server, the export command exports the object from the local Dynamic Data Masking Server.
/y
Forces the command to continue without user confirmation. If you use the /y parameter, Server Control
does not require confirmation to overwrite an existing file. For example, you might use the /y parameter
in a script.
object full path
The path in the Management Console of the object that you want to export.
file
The name of the file that the object is exported to.
-source
Indicates that the following argument is a source. You can indicate a source to export an object from a remote Dynamic Data Masking Server.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
pwd
The password for the Dynamic Data Masking user name.
host
The host name or IP address of the machine that hosts the source Dynamic Data Masking Server.
port
The port number of the source Dynamic Data Masking Server.
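For illustration, an export invocation that combines the parameters above might look like the following sketch. The `server` script name and the export file name are placeholder assumptions, and the object path reuses the Site\backup\SQL_SERVER_DB example from this section:

```
server config export /y "Site\backup\SQL_SERVER_DB" sql_server_db.xml
server config export "Site\backup\SQL_SERVER_DB" sql_server_db.xml -source admin <pwd> <host> <port>
```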
Import
Use the import command to import a Dynamic Data Masking database or security rule set into one or more Dynamic Data Masking Servers. The type of object that you import must match the type of the object that you specify as the import location. For example, you can import a security rule set into a security rule set. You can import an Oracle database into an Oracle database node, but you cannot import an Oracle database into a Sybase database node.
If you do not specify a target Dynamic Data Masking Server, the import command imports the object into the
local Dynamic Data Masking Server. Use the -targets parameter to specify multiple target Dynamic Data
Masking Servers.
object full path
The full path in the Management Console where you want to import the object. If the parent path does not exist in the Management Console tree, the command returns an error. If the object does not exist in the path, Dynamic Data Masking creates the object. If the object exists in the path, the object must be the same object type as the object that you want to import.
For example, you want to import a database into the following location in the Management Console tree:
Site\backup\SQL_SERVER_DB
The parent path is Site\backup and the object is SQL_SERVER_DB. The parent path must exist in the
Management Console tree. If the object does not exist, Dynamic Data Masking creates a database node
named SQL_SERVER_DB.
file
The name of the file that contains the object that you want to import.
-targets
Indicates that the following arguments are targets. You can specify one or more target Dynamic Data Masking Servers.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
pwd
The password for the Dynamic Data Masking user name.
port
The port number of the target Dynamic Data Masking Server.
SetDBPassword
Sets the password for the Dynamic Data Masking database. The Dynamic Data Masking Server must be
running.
If you do not specify a target Dynamic Data Masking Server, the setDBPassword command updates the
password of the database on the local Dynamic Data Masking Server. Use the -targets parameter to specify
multiple target Dynamic Data Masking Servers.
dbpath
The path in the Management Console tree to the Dynamic Data Masking database.
oldDBPassword
The password that Dynamic Data Masking uses to connect to the database.
newDBPassword
The new password that you want Dynamic Data Masking to use to connect to the database.
-targets
Indicates that the following arguments are targets. You can specify one or more target Dynamic Data Masking Servers.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
pwd
The password for the Dynamic Data Masking user name.
host
The host name or IP address of the machine that hosts the target Dynamic Data Masking Server.
port
The port number of the target Dynamic Data Masking Server.
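For illustration, a setDBPassword invocation that follows the parameter order above might look like the following sketch. The `server` script name is an assumption and all values are placeholders:

```
server config setDBPassword <dbpath> <oldDBPassword> <newDBPassword>
server config setDBPassword <dbpath> <oldDBPassword> <newDBPassword> -targets <user> <pwd> <host> <port>
```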
Sync
The sync command synchronizes databases and security rule sets. You cannot synchronize the Dynamic Data Masking Server, Dynamic Data Masking services, connection rules, or loggers. To copy a Dynamic Data Masking service, you must export the service from the source location and import the service into the target location.
You can specify one source Dynamic Data Masking Server and one or more target Dynamic Data Masking Servers. You must specify at least one target or source. If you specify a source and not a target, the sync command uses the local Dynamic Data Masking Server as the target. If you specify a target and not a source, the sync command uses the local Dynamic Data Masking Server as the source.
-source
Indicates that the following arguments identify the source Dynamic Data Masking Server.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
pwd
The password for the Dynamic Data Masking user name.
host
The host name or IP address of the machine that hosts the source Dynamic Data Masking Server.
port
The port number of the source Dynamic Data Masking Server.
-targets
Indicates that the following arguments are targets. You can specify one or more target Dynamic Data Masking Servers.
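For illustration, a sync invocation that copies objects from a remote source to a target might look like the following sketch. The `server` script name and all connection values are placeholder assumptions:

```
server config sync -source <user> <pwd> <source_host> <source_port> -targets <user> <pwd> <target_host> <target_port>
```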
SetKeyStore
Changes a default keystore to a custom keystore, or a custom keystore to a default keystore.
To change the keystore from default to custom, you provide the custom keystore name and alias in the
command. To change the keystore from custom to default, you provide the database user name and
password within the command. In both scenarios you must provide the database path.
You can also set another alias in a database object. The alias must already exist in the designated keystore.
You can set the alias, the keystore name, or both.
path
Path to the database configured for use with the keystore that you want to change.
storeName
The name of the custom keystore.
alias
The alias in the keystore that you want Dynamic Data Masking to use. The alias must already exist in the designated keystore.
user
Username to access the database configured for use with the keystore.
password
Password for the database configured for use with the keystore.
Server Service Commands
Server Control includes the following server service commands:
• export
• import
Export
Exports a Dynamic Data Masking service. The Dynamic Data Masking Server must be running.
Use the export command to export a Dynamic Data Masking service and the connection rules associated
with the service into a file. You can then import the file into one or more Dynamic Data Masking Servers.
If you do not specify a source Dynamic Data Masking Server, the export command exports the service from
the local Dynamic Data Masking Server.
/y
Forces the command to continue without user confirmation. If you use the /y parameter, Server Control
does not require confirmation to overwrite an existing file. For example, you might use the /y parameter
in a script.
DDM Service
The name of the Dynamic Data Masking service that you want to export.
file
The name of the file that the Dynamic Data Masking service exports to.
-source
Indicates that the following argument is a source.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
pwd
The password for the Dynamic Data Masking user name.
host
The host name or IP address of the machine that hosts the source Dynamic Data Masking Server.
port
The port number of the source Dynamic Data Masking Server.
Note: You cannot use the sync command to synchronize Dynamic Data Masking services. To copy a service,
you must export the service from the source location and import the service into the target location.
Import
Imports a Dynamic Data Masking service. The Dynamic Data Masking Server must be running.
Use the import command to import a Dynamic Data Masking service and the connection rules associated
with the service into a Dynamic Data Masking Server. The type of service that you import must be the same
type of service that you specify as the location to import the service. For example, you can import a DDM for
Oracle service into a DDM for Oracle service, but you cannot import a DDM for Oracle service into a DDM for
Sybase service.
If you do not specify a target Dynamic Data Masking Server, the import command imports the service into
the local Dynamic Data Masking Server. Use the -targets parameter to specify multiple target Dynamic Data
Masking Servers.
DDM Service
The name of the Dynamic Data Masking service that you want to import. You must enter a valid name for
the service, such as "DDM for Oracle." If the service that you want to import does not exist, Dynamic Data
Masking creates the service in the Management Console tree. If you specify the name of an existing
service, the type of service that you import must be the same type as the service that you import into.
file
The file name of the Dynamic Data Masking service that you want to import.
-targets
Indicates that the following arguments are targets. You can specify one or more target Dynamic Data Masking Servers.
user
The Dynamic Data Masking user name that you use to log in to the Dynamic Data Masking Server.
pwd
The password for the Dynamic Data Masking user name.
port
The port number of the target Dynamic Data Masking Server.
Note: You cannot use the sync command to synchronize Dynamic Data Masking services. To copy a service,
you must export the service from the source location and import the service into the target location.
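For illustration, exporting a service from one Dynamic Data Masking Server and importing it into another might look like the following sketch. The `server` script name, the file name, and the connection values are placeholder assumptions:

```
server service export /y "DDM for Oracle" ddm_for_oracle.xml
server service import "DDM for Oracle" ddm_for_oracle.xml -targets <user> <pwd> <host> <port>
```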
Performance Tuning
This chapter includes the following topics:
• Performance Factors
• Network Traffic
• Rule Processing
• Log Performance
Performance Factors
The primary factors that influence Dynamic Data Masking performance are network traffic and rule
processing. You can minimize Dynamic Data Masking overhead by using network traffic statistics to ensure
that the Dynamic Data Masking Server has sufficient available resources and by creating efficient rules that
do not cause unnecessary rule processing time.
Network Traffic
Network traffic is measured in the number of packets per second. To determine the number of cores that you
need to dedicate to the Dynamic Data Masking Server, you must measure the volume of the network traffic
between the client and the database.
The Dynamic Data Masking Server can handle 10,000 packets per second per core, or one packet in 100
microseconds. Therefore, if your network traffic is 40,000 packets per second, the Dynamic Data Masking
Server requires four cores. You can use a network analyzing tool such as Wireshark to measure your network
traffic.
If you install the Dynamic Data Masking Server on the database server, you must verify that the required
cores are available to the Dynamic Data Masking Server, in addition to the cores required by the database. If
the database server does not have the necessary cores, you can install the Dynamic Data Masking Server on
a dedicated machine. However, if the Dynamic Data Masking Server is installed on a separate machine,
network performance decreases. Informatica recommends that you install the Dynamic Data Masking Server
on the database server if possible.
In an Oracle database, the CPU consumption is linearly proportional to the SQL*Net traffic that the Dynamic Data Masking service routes. You can use the SQL*Net and DBlinks traffic values to estimate the amount of CPU that the Dynamic Data Masking service consumes.
The Dynamic Data Masking resource consumption is approximately 1% of the server CPU and has no I/O overhead. The Dynamic Data Masking service requires approximately 1 GB of memory and some disk space for logs.
To calculate the CPU consumption, you must determine the round-trip value, which is the total packet traffic that a client and server send and receive each second. The total round-trip value includes SQL*Net traffic and DBlinks.
Use the following variables and equations to calculate the CPU consumption:
Variables
• X1. The one-way SQL*Net traffic, in packets per second.
• X2. The one-way DBlinks traffic, in packets per second.
• PR. The total round-trip packet rate, in packets per second.
Equations
PR = (X1+X2)*2
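As a worked example of the equation, the following shell sketch estimates the number of required cores for hypothetical traffic values (the X1 and X2 values are assumptions):

```shell
# Hypothetical one-way packet rates, in packets per second.
X1=15000   # SQL*Net traffic
X2=5000    # DBlinks traffic

# Round-trip packet rate: each packet travels to the server and back.
PR=$(( (X1 + X2) * 2 ))

# One core handles 10,000 packets per second; round up to whole cores.
CORES=$(( (PR + 9999) / 10000 ))

echo "round-trip rate: ${PR} packets/sec, cores required: ${CORES}"
```

With these values, the round-trip rate is 40,000 packets per second, which matches the four-core example earlier in this chapter.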
Rule Processing
Rule processing causes Dynamic Data Masking overhead. Understanding the factors that contribute to rule processing time is necessary to increase efficiency.
Dynamic Data Masking processing time varies based on the matcher or action type that you use. The following list describes the performance impact for each rule type:
Rules that access the database
Dynamic Data Masking takes between 10 and 100 milliseconds to access the database and about 100 milliseconds to handle a packet. The following matchers and actions require Dynamic Data Masking to access the database:
Rules that require SQL parsing
Dynamic Data Masking takes between one and ten milliseconds to perform SQL parsing, depending on the complexity of the statement.
Rules that do not access the database or require SQL parsing
These rules take Dynamic Data Masking between 10 and 50 microseconds (0.05 milliseconds) to complete.
Rule efficiency
Define a precise rule matcher to ensure that Dynamic Data Masking only applies the rule to the
necessary queries. You can use the Regular Expression Text matcher to determine whether the query
contains the relevant table. The Text matcher requires minimal processing time compared to other
matchers, such as the PL/SQL Function matcher and the SQL Syntax matcher.
Size of the query
The size of the database query affects how long it takes Dynamic Data Masking to parse the query.
Longer queries take longer to parse, and therefore increase processing time.
The following table lists the Dynamic Data Masking overhead for different query sizes. The overhead percentages are for query execution, not end-to-end transactions.
Query Size    Overhead
200           10%
500           20%
2,000         30%
8,000         40%
16,000        60%
50,000        100%
A larger result set size does not increase Dynamic Data Masking overhead because Dynamic Data Masking does not create additional work for the database. If you measure Dynamic Data Masking overhead as a percentage of the total time it takes to return the result set, the overhead percentage decreases as the result set size increases. Therefore, if you execute a query without fetching a result set, the overhead percentage is higher than it would be for an end-to-end transaction.
Note: In stored procedure masking, the result set size does increase overhead because Dynamic Data Masking must create temporary tables and populate the tables with masked data.
Number of rules
Complex rules increase Dynamic Data Masking processing time. For example, if you want to mask all the columns in a table named table1, which has five columns, you send the following query to the database:
SELECT col1, col2, col3, col4, col5 from table1;
You can configure Dynamic Data Masking rules to mask the output in multiple ways, including the following options:
• Option 1. Create one rule that masks every column if the table name is table1.
• Option 2. Create a rule folder that has five rules. Each rule matches the table name and a column name, and masks the column.
Option 2 increases processing time because there are more masking rules. Simple, efficient rules, such as the rule in Option 1, reduce processing time.
Performance of the rewritten query
When Dynamic Data Masking rewrites the original database query, the rewritten query is not optimized. For example, the rewritten query may have a GROUP BY or ORDER BY clause that increases processing time.
Log Performance
Log files use server resources to record service information and events. The tracing level determines the
amount of information that the log stores.
Logs consume system resources and can slow down performance. To improve log performance, you can
change the tracing level to reduce the amount of information that the log stores. By default, each log uses the
information tracing level. The information tracing level is a high impact tracing level and uses the most server
resources.
Note: If you change the tracing level, it does not affect the tracing level for SQL Server.
The following list describes the tracing levels that you can select:
Information
The default log level. Logs all information messages and provides comprehensive information about the Dynamic Data Masking service. The information that the log provides is useful if you encounter an issue with Dynamic Data Masking operations. You can refer to the logs to troubleshoot the problem. Performance impact is high.
Warn
Logs warning messages from the Dynamic Data Masking service. Performance impact is moderate.
Error
Logs error messages and instances when a user connection is force closed. Performance impact is low.
User Stack Limit
Set the user stack limit, or ulimit, to 1,024 kilobytes to ensure that the system can create threads. To view the user stack limit, enter ulimit -s in the command shell. The system returns the value in kilobytes.
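The check and change can be performed in the shell as follows:

```shell
# Display the current user stack limit; the value is reported in kilobytes.
ulimit -s

# Set the user stack limit to 1,024 KB for the current shell session.
ulimit -s 1024

# Verify the new value.
ulimit -s
```

Note that ulimit changes apply only to the current shell and its child processes; consult your operating system documentation to make the limit permanent.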
Troubleshooting
This chapter includes the following topics:
Troubleshooting Overview
The troubleshooting procedures described in this chapter aim to help you resolve common issues. Contact Informatica Global Customer Support if the troubleshooting procedures do not help you resolve a problem.
Log Archive
You can create an encrypted log archive file that you can send to Informatica Global Customer Support to
troubleshoot issues with Dynamic Data Masking.
To create the log archive in Server Control, use the server support command. You can use the -noencrypt
parameter to create an unencrypted log archive. To encrypt an unencrypted log archive, use the server
encrypt command.
To create an encrypted log archive in the Management Console, select the Dynamic Data Masking Server in
the Management Console tree and click Tree > Support. The default file name is
support_encrypted_<date>.zip. You cannot create an unencrypted log archive in the Management Console.
If you have a case open with Informatica Global Customer Support, Informatica recommends that you include
the case number in the name of the file.
The log archive contains the following files:
• All files in the <Dynamic Data Masking installation>/cfg directory with the exception of the following
files:
- config.cfg. The archive contains config.cfg.xml, which is config.cfg in XML format. The XML file
does not contain user passwords.
- config.pbk
• All files in the <Dynamic Data Masking installation>/lib directory.
• All files in the <Dynamic Data Masking installation>/log directory.
• jvm.params
The following files are generated at runtime and included in the archive:
• DDM for <database>.txt. Contains statistics for running the Dynamic Data Masking services.
• environment.txt. Contains operating system environment parameters.
• java_properties.txt. Contains Java system properties.
Note: The log archive does not contain user passwords. Dynamic Data Masking replaces encrypted user
passwords with new encrypted passwords.
Database Connections
This section provides solutions for problems that you might encounter when you connect to a database with
Dynamic Data Masking.
No Listener Defined
The TNS: No Listener error indicates that clients cannot reach the Dynamic Data Masking listener port.
If you receive this error, verify that the firewall configuration has the Dynamic Data Masking listener port and
administrator port open. For example, the default Dynamic Data Masking listener port for Oracle is 1525 and
the default administrator port is 8195.
Note: The TNS: No Listener error is an error for connections to Oracle databases. A similar error might
appear for a different database.
To resolve the TNS connection error, define the database service name.
Dynamic Data Masking Service Refuses Connection Request
The following list describes the possible reasons for the connection refusal and their solutions:
• Error in the listener address. Run a ping command from the system running the Management Console to the Dynamic Data Masking service. Use the Dynamic Data Masking host name. For example: ping 10.65.48.73.
• Error in the listener port. Verify that the firewall has the Dynamic Data Masking listener port and administrator port open.
• Undefined service name. Verify the database service name. Add a service name if one does not exist.
Cleanup Commands
Dynamic Data Masking runs cleanup commands before closing a connection pool to remove privileges from
the Dynamic Data Masking administrator.
Oracle
ALTER SESSION SET CURRENT_SCHEMA = <DDM admin user>
Sybase
SET PROXY <DDM admin user>
DB2
SET CURRENT SCHEMA <DDM admin user>
Database Keywords
This appendix includes the following topic:
• Database Keywords
Dynamic Data Masking reserves certain keywords that the database parsers cannot parse. If you use these
keywords in an SQL query when you form a rule, the query might fail. For example, if a column name in the
SQL query contains one of these keywords, you might receive an invalid character error.
The following lists contain the database keywords for Microsoft SQL Server and Oracle databases.
Keywords for Microsoft SQL Server Databases:
ABSOLUTE, APPLY, AT, ATTRIBUTES, AUTO, BASE64, BIGINT, BINARY, BIT, BLOB, CALL, CHARACTER, CLOB, COLUMNS, CONTAINED, CONTENT, DATE, DATETIME, DATETIME2, DATETIMEOFFSET, DEC, DECIMAL, DELAY, DISABLE, DISABLE_OPTIMIZED_NESTED_LOOP, DYNAMIC, ELEMENTS, EMPTY, EVALNAME, EXPAND, EXTERNALPUSHDOWN, FAST, FAST_FORWARD, FIRST, FLOAT, FN, FORCE, FORCED, FORCESCAN, FORCESEEK, FORWARD_ONLY, GLOBAL, HASH, HINT, IIF, IMAGE, INCLUDING, INDICATOR, INSENSITIVE, INT, INTEGER, KEEP, KEEPFIXED, KEYSET, LABEL, LARGE, LAST, LEVEL, LOCAL, LOGIN, LOOP, MAXDOP, MAXRECURSION, MAX_GRANT_PERCENT, MIN_GRANT_PERCENT, MONEY, NATURAL, NEXT, NO, NOCOUNT, NOEXPAND, NOLOCK, NONE, NOWAIT, NTEXT, NUMERIC, NVARCHAR, OBJECT, OFFSET, OPTIMISTIC, OPTIMIZE, ORDINALITY, OUTPUT, PAGLOCK, PARAMETERIZATION, PARSE, PATH, PRECEDING, PRECISION, PRESERVE, QUOTED_IDENTIFIER, RAW, READCOMMITTED, READCOMMITTEDLOCK, READPAST, READUNCOMMITTED, READ_ONLY, REAL, RECOMPILE, REDISTRIBUTE, REDUCE, REF, RELATIVE, REPEATABLE, REPEATABLEREAD, RESULT, ROBUST, ROOT, ROW, ROWLOCK, ROWS, SCROLL, SCROLL_LOCKS, SERIALIZABLE, SETS, SIMPLE, SMALLDATETIME, SMALLINT, SMALLMONEY, SNAPSHOT, SPATIAL_WINDOW_MAX_CELLS, START, STATIC, STRING_SPLIT, SYSTEM, SYSTEM_TIME, TABLOCK, TABLOCKX, TEXT, TINYINT, TRY_CAST, TRY_CONVERT, TRY_PARSE, TYPE_WARNING, UNBOUNDED, UNDEFINED, UNKNOWN, UPDLOCK, USING, VARBINARY, VARCHAR, VIEWS, WRITE, XLOCK, XML, XMLAGG, XMLATTRIBUTES, XMLCAST, XMLCOLATTVAL, XMLCONCAT, XMLDOCUMENT, XMLELEMENT, XMLEXISTS, XMLFOREST, XMLNAMESPACES, XMLPARSE, XMLPI, XMLQUERY, XMLROW, XMLSERIALIZE, XMLTABLE, ZONE
Keywords for Oracle Databases:
ABSENT, ATTRIBUTES, AUTO, BASE64, BEGIN, BIGINT, BINARY, BIT, BLOB, CHARACTER, CLOB, COLUMNS, CONTENT, CONVERT, CROSS, DATETIME, DATETIME2, DATETIMEOFFSET, DBCLOB, DEC, DECFLOAT, DOCUMENT, DOUBLE, ELEMENTS, EMPTY, END, ENTITYESCAPING, EVALNAME, EXCEPT, EXCLUDING, EXIT, EXPLICIT, FIRST, FULL, HOLDLOCK, IF, ILIKE, IMAGE, INCLUDING, INDICATOR, INNER, INT, IREGEXP, JOIN, LARGE, LAST, LIKE2, LIKE4, LIKEC, MONEY, NATURAL, NCHAR, NO, NOENTITYESCAPING, NOHOLDLOCK, NOSCHEMACHECK, NTEXT, NULLS, NUMERIC, NVARCHAR, OBJECT, OFF, OFFSET, OPENXML, ORDINALITY, OUTER, OVER, PARTITION, PASSING, PATH, PRESERVE, QUALIFY, QUOTED_IDENTIFIER, REAL, REF, REGEXP, RETURNING, RLIKE, ROOT, ROWCOUNT, SAMPLE, SCHEMACHECK, SEMI, SEQUENCE, SHARED, SIBLINGS, SMALLDATETIME, SMALLMONEY, SOME, STRIP, SUBPARTITION, SYSTIMESTAMP, TEXT, TINYINT, USE, USING, VARBINARY, WAIT, WHEN, XMLAGG, XMLATTRIBUTES, XMLCAST, XMLCOLATTVAL, XMLCONCAT, XMLDOCUMENT, XMLELEMENT, XMLEXISTS, XMLFOREST, XMLNAMESPACES, XMLPARSE, XMLPI, XMLQUERY, XMLROW, XMLSERIALIZE, XMLTABLE, XMLXSROBJECTID