How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)

In this Document

Purpose
Scope
Details
  A. Requirement
  B. Example of what we expect
  C. Syntax reference
  D. Multicast
  E. Runtime network issues
  F. Symptoms of network issues
  G. Basics of Subnet
References

Document Details

Type: BULLETIN
Status: PUBLISHED
Last Major Update: 28-Mar-2025
Last Update: 28-Mar-2025
Language: English

APPLIES TO:

Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Oracle Database - Enterprise Edition - Version 10.1.0.2 and later
Oracle Database Cloud Exadata Service - Version N/A and later
Generic UNIX
Generic Linux

PURPOSE

Cluster Verification Utility (aka CVU, command runcluvfy.sh or cluvfy) does a very good job of checking the network and name resolution setup, but it may not capture all issues. If the network and name resolution are not set up properly before installation, the installation is likely to fail; if the network or name resolution is malfunctioning, the clusterware and/or RAC will likely have issues. The goal of this note is to provide a list of things to verify regarding the network and name resolution setup for Grid Infrastructure (clusterware) and RAC.
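Before walking through the manual checks below, CVU itself can be run against all nodes as a first pass. A minimal sketch (the node names node1,node2,node3 are placeholders for your own; see the CVU documentation for the full option list):

# from the installation media, before Grid Infrastructure is installed
./runcluvfy.sh stage -pre crsinst -n node1,node2,node3 -verbose

# from an existing Grid Infrastructure home
cluvfy comp nodecon -n node1,node2,node3 -verbose
cluvfy comp nodereach -n node1,node2,node3 -verbose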
SCOPE

This document is intended for Oracle Clusterware/RAC Database Administrators and Oracle Support engineers.

DETAILS

A. Requirement

o Network ping with packet size of the network adapter (NIC) MTU should work on all public and private networks, and the ping time should be small (sub-second).

o IP address 127.0.0.1 should only map to localhost and/or localhost.localdomain, not anything else.

o 127.*.*.* should not be used by any network interface.

o Public NIC name must be the same on all nodes.

o Private NIC name should be the same in 11gR2, and must be the same for pre-11gR2, on all nodes.

o Public and private networks must not be in the link local subnet (169.254.*.*); they should be in separate, unrelated subnets.

o MTU should be the same for the corresponding network on all nodes.

o Network size should be the same for the corresponding network on all nodes.

o As the private network needs to be directly attached, traceroute should complete in 1 hop on all private networks with a packet size of NIC MTU, without fragmentation and without going through the routing table.

o Firewall needs to be turned off on the private network (see the sketch after this list).

o For 10.1 to 11.1, name resolution should work for the public, private and virtual names.

o For 11.2 without Grid Naming Service (aka GNS), name resolution should work for all public, virtual, and SCAN names; and if SCAN is configured in DNS, it should not be in the local hosts file.

o For 11.2.0.2 and above, multicast group 230.0.1.0 should work on the private network; with patch 9974223, both groups 230.0.1.0 and 224.0.0.251 are supported. With patch 10411721 (fixed in 11.2.0.3), broadcast is supported as well. See the Multicast section to verify.

o For 11.2.0.1-11.2.0.3, the Installer may report a warning if reverse lookup is not set up correctly for the public IP, node VIP, and SCAN VIP; with the bug 9574976 fix in 11.2.0.4, the warning should no longer appear.

o OS level bonding is recommended for the private network for pre-11.2.0.2. Depending on the platform, you may implement bonding, teaming, Etherchannel, IPMP, MultiPrivNIC etc.; please consult your OS vendor for details. Starting with 11.2.0.2, Redundant Interconnect and HAIP are introduced to provide native support for multiple private networks; refer to note 1210883.1 for details.

o The commands below also verify jumbo frames if they are configured. To know more about jumbo frames, refer to note 341788.1.
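As called out above, the firewall must be off on the private network; this can be spot-checked at the OS level. A minimal Linux sketch (which command applies depends on the release, i.e. whether iptables or firewalld is in use):

# rule listing should show nothing blocking the private subnet
/sbin/iptables -L -n
# older releases: the iptables service should be stopped
/sbin/service iptables status
# newer releases: firewalld should be inactive
systemctl status firewalld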
B. Example of what we expect

The example below shows what we expect while validating the network and name resolution setup. As the network setup is slightly different for 11gR2 and 11gR1 or below, both cases are shown. The difference is that for 11gR1 or below we need a public name, VIP name, and private hostname, and we rely on the private name to find the private IP for cluster communication; for 11gR2 we no longer rely on the private name, rather the private network is selected based on the GPnP profile while the clusterware comes up. Assuming a 3-node cluster with the following node information:

11gR1 or below cluster:

Nodename |Public IP  |VIP name |VIP        |Private  |Private IP1 |Private IP2
         |NIC/MTU    |         |           |Name1    |NIC/MTU     |
---------|-----------|---------|-----------|---------|------------|-----------
<node1>  |120.X.X.1  |<node1>v |120.X.X.11 |<node1>p |10.X.X.1    |
         |eth0/1500  |         |           |         |eth1/1500   |
---------|-----------|---------|-----------|---------|------------|-----------
<node2>  |120.X.X.2  |<node2>v |120.X.X.12 |<node2>p |10.X.X.2    |
         |eth0/1500  |         |           |         |eth1/1500   |
---------|-----------|---------|-----------|---------|------------|-----------
<node3>  |120.X.X.3  |<node3>v |120.X.X.13 |<node3>p |10.X.X.3    |
         |eth0/1500  |         |           |         |eth1/1500   |
---------|-----------|---------|-----------|---------|------------|-----------

11gR2 cluster:

Nodename |Public IP  |VIP name |VIP        |Private IP1 |
         |NIC/MTU    |         |           |NIC/MTU     |
---------|-----------|---------|-----------|------------|
<node1>  |120.X.X.1  |<node1>v |120.X.X.11 |10.X.X.1    |
         |eth0/1500  |         |           |eth1/1500   |
---------|-----------|---------|-----------|------------|
<node2>  |120.X.X.2  |<node2>v |120.X.X.12 |10.X.X.2    |
         |eth0/1500  |         |           |eth1/1500   |
---------|-----------|---------|-----------|------------|
<node3>  |120.X.X.3  |<node3>v |120.X.X.13 |10.X.X.3    |
         |eth0/1500  |         |           |eth1/1500   |
---------|-----------|---------|-----------|------------|

SCAN name  |SCAN IP1   |SCAN IP2   |SCAN IP3
-----------|-----------|-----------|-----------
<scanname> |120.X.X.21 |120.X.X.22 |120.X.X.23
-----------|-----------|-----------|-----------
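For the 11gR2 case, the interfaces the clusterware actually uses are recorded in the GPnP profile; once Grid Infrastructure is installed, the interface classification can be confirmed with oifcfg. A minimal sketch (the output shown is illustrative for the example cluster above):

$GRID_HOME/bin/oifcfg getif
eth0  120.X.X.0  global  public
eth1  10.X.X.0   global  cluster_interconnect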
Below is what needs to be verified on each node - please note the example is from a Linux platform:

1. To find out the MTU

/bin/netstat -in
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 203273 0 0 0 2727 0 0 0 BMRU

In the above example, the MTU is set to 1500 for eth0.
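On newer Linux releases where the net-tools package (netstat) may not be installed, the same information is available from iproute2; a sketch, not from the original note (output abbreviated and illustrative):

/sbin/ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000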

2. To find out the IP address and subnet, compare Broadcast and Netmask on all nodes

/sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:11:11:11
inet addr:120.X.X.1 Bcast:120.xxx.xxx.127 Mask:255.xxx.xxx.128
inet6 addr: fe80::216:3eff:fe11:1111/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:203245 errors:0 dropped:0 overruns:0 frame:0
TX packets:2681 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:63889908 (60.9 MiB) TX bytes:319837 (312.3 KiB)
..

In the above example, the IP address for eth0 is 120.X.X.1, the broadcast is 120.X.X.127, and the netmask is 255.255.255.128, which is subnet 120.X.X.0 with a maximum of 126 IP addresses. Refer to Section "G. Basics of Subnet" for more details.

Note: An active NIC must have both the "UP" and "RUNNING" flags; on Solaris, "PHYSRUNNING" indicates whether the physical
interface is running
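The 126-address figure follows directly from the netmask; a worked example with the values above:

255.255.255.128 = /25 prefix  ->  32 - 25 = 7 host bits
2^7 = 128 addresses  ->  minus network (120.X.X.0) and broadcast (120.X.X.127) = 126 usable IP addresses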

3. Run all ping commands twice to make sure result is consistent

Below is an example ping output from node1 public IP to node2 public hostname:

PING <nodename2> (120.X.X.2) from 120.X.X.1 : 1500(1528) bytes of data.


1508 bytes from <nodename2> (120.X.X.2): icmp_seq=1 ttl=64 time=0.742 ms
1508 bytes from <nodename2> (120.X.X.2): icmp_seq=2 ttl=64 time=0.415 ms

--- <nodename2> ping statistics ---


2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.415/0.578/0.742/0.165 ms

Please pay attention to the packet loss and time. If there is not 0% packet loss, or the time is not sub-second, it indicates a
problem in the network; please engage the network administrator to check further.

3.1 Ping all public nodenames from the local public IP with packet size of MTU

/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <nodename1>


/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <nodename1>
/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <nodename2>
/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <nodename2>
/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <nodename3>
/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <nodename3>

3.2.1 Ping all private IP(s) from all local private IP(s) with packet size of MTU
applies to 11gR2 example, private name is optional

/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 10.xxx.xxx.1


/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 10.xxx.xxx.1
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 10.xxx.xxx.2
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 10.xxx.xxx.2
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 10.xxx.xxx.3
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 10.xxx.xxx.3

3.2.2 Ping all private nodename from local private IP with packet size of MTU
applies to 11gR1 and earlier example

/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 <nodename1>p


/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 <nodename1>p
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 <nodename2>p
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 <nodename2>p
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 <nodename3>p
/bin/ping -s <MTU> -c 2 -I 10.xxx.xxx.1 <nodename3>p
4. Traceroute private network

The example below shows a traceroute from the node1 private IP to the node2 private hostname:

# Packet size of MTU - on Linux packet length needs to be MTU - 28 bytes otherwise error send: Message too
long is reported.
# For example with MTU value of 1500 we would use 1472 :

traceroute to <nodename2>p (10.xxx.xxx.2), 30 hops max, 1472 byte packets


1 <nodename2>p (10.xxx.xxx.2) 0.626 ms 0.567 ms 0.529 ms

An MTU-size packet traceroute should complete in 1 hop without going through the routing table. Output other than the above
indicates an issue, e.g. when "*" or "!H" is present.

Note: traceroute option "-F" may not work on RHEL3/4 OEL4 due to OS bug, refer to note: 752844.1 for details.

4.1 Traceroute all private IP(s) from all local private IP(s) with packet size of MTU-28:
applies to 11gR2 onwards

/bin/traceroute -s 10.xxx.xxx.1 -r -F 10.xxx.xxx.1 <MTU-28>


/bin/traceroute -s 10.xxx.xxx.1 -r -F 10.xxx.xxx.2 <MTU-28>
/bin/traceroute -s 10.xxx.xxx.1 -r -F 10.xxx.xxx.3 <MTU-28>

If "-F" option does not work, then traceroute without the "-F" parameter but with packet that's triple the MTU size, i.e.:

/bin/traceroute -s 10.xxx.xxx.1 -r 10.xxx.xxx.1 <3 x MTU>

4.2 Traceroute all private nodename from local private IP with packet size of MTU
applies to 11gR1 and earlier example

/bin/traceroute -s 10.xxx.xxx.1 -r -F <nodename1>p <MTU-28>


/bin/traceroute -s 10.xxx.xxx.1 -r -F <nodename2>p <MTU-28>
/bin/traceroute -s 10.xxx.xxx.1 -r -F <nodename3>p <MTU-28>

If "-F" option does not work, then run traceroute without the "-F" parameter but with packet that's triple MTU size, i.e.:

/bin/traceroute -s 10.xxx.xxx.1 -r <nodename1>p <3 x MTU>

5. Ping VIP hostname


# Ping of all VIP nodename should resolve to correct IP
# Before the clusterware is installed, ping should be able to resolve VIP nodename but
# should fail as VIP is managed by the clusterware
# After the clusterware is up and running, ping should succeed

/bin/ping -c 2 <nodename1>v
/bin/ping -c 2 <nodename1>v
/bin/ping -c 2 <nodename2>v
/bin/ping -c 2 <nodename2>v
/bin/ping -c 2 <nodename3>v
/bin/ping -c 2 <nodename3>v

6. Ping SCAN name


# applies to 11gR2
# Ping of SCAN name should resolve to correct IP
# Before the clusterware is installed, ping should be able to resolve SCAN name but
# should fail as SCAN VIP is managed by the clusterware
# After the clusterware is up and running, ping should succeed

/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <scanname>


/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <scanname>
/bin/ping -s <MTU> -c 2 -I 120.xxx.xxx.1 <scanname>

7. Nslookup VIP hostname and SCAN name


# applies to 11gR2
# To check whether VIP nodename and SCAN name are setup properly in DNS

/usr/bin/nslookup <nodename1>v
/usr/bin/nslookup <nodename2>v
/usr/bin/nslookup <nodename3>v
/usr/bin/nslookup <scanname>
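Reverse lookup (see the Requirement section regarding 11.2.0.1-11.2.0.3 installer warnings) can be checked the same way by passing the IP address instead of the name; a sketch using the example addresses above:

/usr/bin/nslookup 120.X.X.11
/usr/bin/nslookup 120.X.X.21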

8. To check name resolution order


# /etc/nsswitch.conf on Linux, Solaris and HP-UX, /etc/netsvc.conf on AIX

/bin/grep ^hosts /etc/nsswitch.conf


hosts: dns files

9. To check local hosts file


# If "files" is in the name service switch setting (nsswitch.conf), to make sure the
# hosts file doesn't have a typo or misconfiguration, grep all nodenames and IPs.
# 127.0.0.1 should not map to the SCAN name, public, private or VIP hostname

Public and node VIP:

/bin/grep <nodename1> /etc/hosts


/bin/grep <nodename2> /etc/hosts
/bin/grep <nodename3> /etc/hosts
/bin/grep <nodename1>v /etc/hosts
/bin/grep <nodename2>v /etc/hosts
/bin/grep <nodename3>v /etc/hosts
/bin/grep 120.X.X.1 /etc/hosts
/bin/grep 120.X.X.2 /etc/hosts
/bin/grep 120.X.X.3 /etc/hosts
/bin/grep 120.X.X.11 /etc/hosts
/bin/grep 120.X.X.12 /etc/hosts
/bin/grep 120.X.X.13 /etc/hosts

# pre-11gR2 private example


/bin/grep <nodename1>p /etc/hosts
/bin/grep <nodename2>p /etc/hosts
/bin/grep <nodename3>p /etc/hosts
/bin/grep 10.X.X.1 /etc/hosts
/bin/grep 10.X.X.2 /etc/hosts
/bin/grep 10.X.X.3 /etc/hosts

# 11gR2 private example


/bin/grep 10.X.X.1 /etc/hosts
/bin/grep 10.X.X.2 /etc/hosts
/bin/grep 10.X.X.3 /etc/hosts

# SCAN example
# If SCAN name is setup in DNS, it should not be in local hosts file
/bin/grep <scanname> /etc/hosts
/bin/grep 120.X.X.21 /etc/hosts
/bin/grep 120.X.X.22 /etc/hosts
/bin/grep 120.X.X.23 /etc/hosts
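For reference, a well-formed hosts file entry set for node1 of the example cluster would look like the following (illustrative only; <domain> is a placeholder, and the SCAN entries belong in the hosts file only when SCAN is NOT registered in DNS):

120.X.X.1    <nodename1>.<domain>   <nodename1>
120.X.X.11   <nodename1>v.<domain>  <nodename1>v
10.X.X.1     <nodename1>p.<domain>  <nodename1>p     # pre-11gR2 private name only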

C. Syntax reference

Please refer to the following for command syntax on different platforms:

Linux:
/bin/netstat -in
/sbin/ifconfig
/bin/ping -s <MTU> -c 2 -I source_IP nodename
/bin/traceroute -s source_IP -r -F nodename-priv <MTU-28>
/usr/bin/nslookup

Solaris:
/bin/netstat -in
/usr/sbin/ifconfig -a
/usr/sbin/ping -i source_IP -s nodename <MTU> 2
/usr/sbin/traceroute -s source_IP -r -F nodename-priv <MTU>
/usr/sbin/nslookup

HP-UX:
/usr/bin/netstat -in
/usr/sbin/ifconfig NIC
/usr/sbin/ping -i source_IP nodename <MTU> -n 2
/usr/contrib/bin/traceroute -s source_IP -r -F nodename-priv <MTU>
/bin/nslookup

AIX:
/bin/netstat -in
/usr/sbin/ifconfig -a
/usr/sbin/ping -S source_IP -s <MTU> -c 2 nodename
/bin/traceroute -s source_IP -r nodename-priv <MTU>
/bin/nslookup

Windows:
MTU:
Windows XP: netsh interface ip show interface
Windows Vista/7: netsh interface ipv4 show subinterfaces
ipconfig /all
ping -n 2 -l <MTU-28> -f nodename
tracert
nslookup

D. Multicast

Starting with 11.2.0.2, multicast group 230.0.1.0 should work on the private network for bootstrapping. Patch 9974223 introduces
support for another group, 224.0.0.251.

Please refer to note 1212703.1 to verify whether multicast is working fine.

As the fix for bug 10411721 is included in 11.2.0.3, broadcast is supported for bootstrapping as well as multicast. When 11.2.0.3
Grid Infrastructure starts up, it tries broadcast and multicast groups 230.0.1.0 and 224.0.0.251 simultaneously; if any one succeeds,
it is able to start.

On HP-UX, if 10 Gigabit Ethernet is used as the private network adapter, multicast may not work without driver revision
B.11.31.1011 or later of the 10GigEthr-02 software bundle. Run the "swlist 10GigEthr-02" command to identify the current version on
your HP server.

E. Runtime network issues

OSWatcher or Cluster Health Monitor (IPD/OS) can be deployed to capture runtime network issues.
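For OSWatcher, a typical start command looks like the following; a sketch only, with illustrative parameters (30-second snapshot interval, 48 hours of archives kept, gzip compression). See note 301137.1 for the documented usage:

./startOSWbb.sh 30 48 gzip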

F. Symptoms of network issues

o ping doesn't work, ping packet loss or ping time is too long (not sub-second)

o traceroute doesn't work

o name resolution doesn't work

o traceroute output like:

1 racnode1 (192.168.30.2) 0.223 ms !X 0.201 ms !X 0.193 ms !X

o gipcd.log shows:

2010-11-21 13:00:44.455: [ GIPCNET][1252870464]gipcmodNetworkProcessConnect: [network] failed connect


attempt endp 0xc7c5590 [0000000000000356] { gipcEndpoint : localAddr 'gipc://<nodename3>:08b1-c475-
a88e-8387#10.XX.XX.23#27573', remoteAddr 'gipc://<nodename2>:nm_rac-cluster#192.168.0.22#26869',
numPend 0, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, flags 0x80612,
usrFlags 0x0 }, req 0xc7c5310 [000000000000035f] { gipcConnectRequest : addr
'gipc://<nodename2>:nm_rac-cluster#192.168.0.22#26869', parentEn
2010-11-21 13:00:44.455: [ GIPCNET][1252870464]gipcmodNetworkProcessConnect: slos op : sgipcnTcpConnect
2010-11-21 13:00:44.455: [ GIPCNET][1252870464]gipcmodNetworkProcessConnect: slos dep : No route to
host (113)

or

2010-11-04 12:33:22.133: [ GIPCNET][2314] gipcmodNetworkProcessSend: slos op : sgipcnUdpSend


2010-11-04 12:33:22.133: [ GIPCNET][2314] gipcmodNetworkProcessSend: slos dep : Message too long (59)
2010-11-04 12:33:22.133: [ GIPCNET][2314] gipcmodNetworkProcessSend: slos loc : sendto
2010-11-04 12:33:22.133: [ GIPCNET][2314] gipcmodNetworkProcessSend: slos info : dwRet 4294967295,
addr '19

o ocssd.log shows:

2010-02-03 23:26:25.804: [GIPCXCPT][1206540320]gipcmodGipcPassInitializeNetwork: failed to find any


interfaces in clsinet, ret gipcretFail (1)
2010-02-03 23:26:25.804: [GIPCGMOD][1206540320]gipcmodGipcPassInitializeNetwork: EXCEPTION[ ret
gipcretFail (1) ] failed to determine host from clsinet, using default
..
2010-02-03 23:26:25.810: [ CSSD][1206540320]clsssclsnrsetup: gipcEndpoint failed, rc 39
2010-02-03 23:26:25.811: [ CSSD][1206540320]clssnmOpenGIPCEndp: failed to listen on gipc addr
gipc://rac1:nm_eotcs- ret 39
2010-02-03 23:26:25.811: [ CSSD][1206540320]clssscmain: failed to open gipc endp

or

2010-09-20 11:52:54.014: [ CSSD][1103055168]clssnmvDHBValidateNCopy: node 1, racnode1, has a disk


HB, but no network HB, DHB has rcfg 180441784, wrtcnt, 453, LATS 328297844, lastSeqNo 452, uniqueness
1284979488, timestamp 1284979973/329344894
2010-09-20 11:52:54.016: [ CSSD][1078421824]clssgmWaitOnEventValue: after CmInfo State val 3, eval
1 waited 0
.. >>>> after a long delay
2010-09-20 12:02:39.578: [ CSSD][1103055168]clssnmvDHBValidateNCopy: node 1, <nodename1>, has a disk
HB, but no network HB, DHB has rcfg 180441784, wrtcnt, 1037, LATS 328883434, lastSeqNo 1036, uniqueness
1284979488, timestamp 1284980558/329930254
2010-09-20 12:02:39.895: [ CSSD][1107286336]clssgmExecuteClientRequest: MAINT recvd from proc 2
(0xe1ad870)

o crsd.log shows:

2010-11-29 10:52:38.603: [GIPCHALO][2314] gipchaLowerProcessNode: no valid interfaces found to node for


2614824036 ms, node 111ea99b0 { host '<nodename1>', haName '1e0b-174e-37bc-a515', srcLuid 2612fa8e-
3db4fcb7, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [55 :
55], createTime 2614768983, flags 0x4 }
2010-11-29 10:52:42.299: [ CRSMAIN][515] Policy Engine is not initialized yet!
2010-11-29 10:52:43.554: [ OCRMAS][3342]proath_connect_master:1: could not yet connect to master
retval1 = 203, retval2 = 203
2010-11-29 10:52:43.554: [ OCRMAS][3342]th_master:110': Could not yet connect to new master [1]

or

2009-12-10 06:28:31.974: [ OCRMAS][20]proath_connect_master:1: could not connect to master clsc_ret1


= 9, clsc_ret2 = 9
2009-12-10 06:28:31.974: [ OCRMAS][20]th_master:11: Could not connect to the new master
2009-12-10 06:29:01.450: [ CRSMAIN][2] Policy Engine is not initialized yet!
2009-12-10 06:29:31.489: [ CRSMAIN][2] Policy Engine is not initialized yet!

or

2009-12-31 00:42:08.110: [ COMMCRS][10]clsc_receive: (102b03250) Error receiving, ns (12535, 12560),


transport (505, 145, 0)

o octssd.log shows:

2011-04-16 02:59:46.943: [ CTSS][1]clsu_get_private_ip_addresses: clsinet_GetNetData failed ().


Return [7]
[ CTSS][1](:ctss_init6:): Failed to call clsu_get_private_ip_addr [7]
gipcmodGipcPassInitializeNetwork: failed to find any interfaces in clsinet, ret gipcretFail (1)
gipcmodGipcPassInitializeNetwork: EXCEPTION[ ret gipcretFail (1) ] failed to determine host from
clsinet, using default
[ CRSCCL][2570585920]No private IP addresses found.
(:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 1 nodes with
leader 2, <nodename2>, is smaller than cohort of 1 nodes led by node 1, <nodename1>, based on map type
2

G. Basics of Subnet

Refer to note 1386709.1 for details

REFERENCES

NOTE:1056322.1 - Troubleshoot Grid Infrastructure/RAC Database installer/runInstaller Issues
NOTE:1212703.1 - Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement
NOTE:301137.1 - OSWatcher
NOTE:1210883.1 - Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip
NOTE:1507482.1 - Oracle Clusterware Cannot Start on all Nodes: Network communication with node missing for 90% of timeout interval
NOTE:1386709.1 - The Basics of IPv4 Subnet and Oracle Clusterware
NOTE:752844.1 - RHEL3, RHEL4, OEL4: traceroute Fails with -F (do not fragment bit) Argument
NOTE:341788.1 - Recommendation for the Real Application Cluster Interconnect and Jumbo Frames

Related Products

Oracle Cloud > Oracle Infrastructure Cloud > Oracle Cloud at Customer > Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine)
Oracle Cloud > Oracle Platform Cloud > Oracle Cloud Infrastructure - Database Service > Oracle Cloud Infrastructure - Database Service
Oracle Cloud > Oracle Platform Cloud > Oracle Database Backup Service > Oracle Database Backup Service
Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Enterprise Edition > Clusterware
Oracle Cloud > Oracle Platform Cloud > Oracle Database Cloud Exadata Service > Oracle Database Cloud Exadata Service
Oracle Cloud > Oracle Platform Cloud > Oracle Database Cloud Service > Oracle Database Exadata Express Cloud Service
Oracle Cloud > Oracle Platform Cloud > Oracle Database Cloud Service > Oracle Database Cloud Service
Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Standard Edition > Real Application Cluster
Oracle Database Products > Oracle Database Suite > Oracle Database > Oracle Database - Standard Edition > Clusterware > Clusterware Install
Oracle Cloud > Oracle Platform Cloud > Oracle Database Cloud Service > Oracle Database Cloud Schema Service

Keywords
ADDRESS; CLUVFY; CRS; FRAGMENTATION; GRID INFRASTRUCTURE; INFRASTRUCTURE; IP ADDRESS; RAC; VERIFICATION
Errors
ORA-12514

