
Oracle Database 10g Release 2

Automatic Storage Management


Overview and Technical Best Practices

Nitin Vengurlekar
Oracle Corporation
OOW 2005

Table of Contents

Introduction
Overview of Oracle Database 10g Release 2 ASM New Features
ASM installation
ASM SGA and parameter sizing
ASM and privileges
ASMLIB
Disks
ASM and Multipathing
DiskGroups
Diskgroups and databases
ASM redundancy and Failure Groups
New Space related columns in Oracle Database 10g Release 2
Cluster Synchronization Services - CSS
Database Instances
Database consolidation and clustering
ASM – Database Multi-Version Support
Database processes to support ASM
Database Init.ora parameters to support ASM
ASM and database shutdown dependencies
ASM and database deployment Best Practices
Storage Management and Allocation
Rebalance and Redistribution
Files and Aliases
Enhanced management for ASM
ASM Command Line Interface (ASMCMD)
DBMS_FILE_TRANSFER Utility Enhancements
XML ASM Virtual Folder
ASM Virtual Folder access via FTP
Templates
Simplified ASM migration using EM
Conclusion
Appendix A. Views and Discover Strings
Appendix B. Migrating individual non-ASM datafiles to ASM
Appendix C. Datapump filesets and ASM
Appendix D. Creating extra controlfiles in ASM
Appendix E. ASM SQL scripts
Appendix F. ASM IOSTAT utility – ASMIOSTAT

Introduction

In Oracle Database 10g Release 2, storage management and provisioning for the database have been greatly simplified using Automatic Storage Management (ASM). ASM provides filesystem and volume manager capabilities built into the Oracle database kernel. With this capability, ASM simplifies storage management tasks such as creating and laying out databases and managing disk space. Since ASM allows disk management to be done using familiar create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make crucial decisions on provisioning. An Enterprise Manager interface, as well as a new command-line utility (new in Oracle Database 10g Release 2), is also available for those ASM administrators who are not familiar with SQL.

ASM is a management tool specifically built to simplify the job of the DBA. It provides a simple storage
management interface across all server and storage platforms. ASM provides the DBA flexibility to
manage a dynamic database environment with increased efficiency. This feature is a key component of
Grid Computing and Database Storage Consolidation.

The following are some key benefits of ASM:

o I/O is spread evenly across all available disk drives to prevent hot spots and maximize
performance.
o ASM eliminates the need for over provisioning and maximizes storage resource utilization
facilitating database consolidation.
o Inherent large file support.
o Performs automatic online redistribution after the incremental addition or removal of storage
capacity.
o Maintains redundant copies of data to provide high availability, or leverage 3rd party RAID
functionality.
o Supports Oracle Database 10g as well as Oracle Real Application Clusters (RAC).
o Capable of leveraging 3rd party multipathing technologies.
o For simplicity and easier migration to ASM, an Oracle Database 10g Release 2 database can
contain ASM and non-ASM files. Any new files can be created as ASM files whilst existing files
can also be migrated to ASM.
o RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
o Oracle Database 10g Enterprise Manager can be used to manage ASM disk and file management
activities.
o ASM reduces Oracle Database 10g cost and complexity without compromising performance or
availability.

This document discusses the essentials of ASM and covers the steps to discover disks, create a diskgroup, and eventually create a database within an ASM diskgroup, presenting some best practices along the way.

The following components of ASM will be discussed:


o Instances – ASM instance and database instance
o ASM Disks
o Diskgroups
o Failure Groups
o Rebalancing Capabilities
o ASM Files, Templates and Aliases
o ASM and Cluster Synchronization Services (CSS)

Overview of Oracle Database 10g Release 2 ASM New Features

Automatic Storage Management in Oracle Database 10g Release 2 expands the capabilities of ASM and
provides features and enhancements in the following categories:

o Database consolidation and clustering


o Database storage consolidation with Single Instance and RAC
o Multiple database version support
o Independent ASM Oracle home
o Easy database migration to ASM
o Enterprise Manger (EM) ASM migration utility
o Management flexibility and simplicity
o Command Line Interface (CLI)
o ASM file system view through XMLDB virtual folders
o Enhanced ASM manageability and alerts
o Enhancements in DBMS_FILE_TRANSFER utility

This paper will introduce the new features of Automatic Storage Management (ASM) in Oracle Database
10g Release 2, and describe the benefits and improvements in simplification and automation for managing
Oracle database files.

See the following document for a high level overview of all the new Oracle Database 10g Release 2 ASM
Features:
ASM 10gR2 New Features – http://asm.us.oracle.com/pdf/ASM R2 New Features.pdf

ASM Instances

In Oracle Database 10g there are two types of instances: database and ASM instances. The ASM instance, which is generally named +ASM[1], is started with the INSTANCE_TYPE=ASM init.ora parameter. This parameter, when set, signals the Oracle initialization routine to start an ASM instance and not a standard database instance. Unlike the standard database instance, the ASM instance contains no physical files, such as logfiles, controlfiles or datafiles, and only requires a few init.ora parameters for startup[2].

Upon startup, an ASM instance will spawn all the basic background processes, plus some new ones that are
specific to the operation of ASM. The STARTUP clauses for ASM instances are similar to those for
database instances. For example, RESTRICT prevents database instances from connecting to this ASM
instance. NOMOUNT starts up an ASM instance without mounting any disk group. The MOUNT option simply mounts all defined diskgroups[3].
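As a sketch, starting an ASM instance with these clauses looks like any other instance startup (the diskgroup name is taken from this paper's running example; this is an illustration, not a required sequence):

SQL> -- start the ASM instance without mounting, then mount one diskgroup
SQL> startup nomount
SQL> alter diskgroup DATA mount;

SQL> -- alternatively, mount everything listed in asm_diskgroups at startup
SQL> startup mount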

The illustration in Figure 1 shows the basic init.ora parameters required to start ASM. Observe that all
ASM processes begin with asm, as opposed to the database instance, whose processes begin with ora.

ASM is the volume manager for all databases that employ ASM on a given node. Therefore, only one
ASM instance is required per node regardless of the number of database instances on the node.
Additionally, ASM seamlessly works with the RAC architecture to support clustered storage environments.
In RAC environments, there will be one ASM instance per clustered node, and the ASM instances
communicate with each other on a peer-to-peer basis using the interconnect.

Figure 1.
Instance type
SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM

ASM init.ora parameters


*.background_dump_dest='/opt/app/admin/+ASM/bdump'
*.core_dump_dest='/opt/app/admin/+ASM/cdump'
*.instance_type=asm
*.asm_diskgroups=+DATA
*.large_pool_size=12M
*.asm_diskstring='/dev/rdsk/c3t19d*s4'
*.remote_login_passwordfile='SHARED'
*.user_dump_dest='/opt/app/admin/+ASM/udump'

ASM background processes


oracle 2423 1 0 Apr30 ? 00:00:00 asm_pmon_+ASM1
oracle 2425 1 0 Apr30 ? 00:00:00 asm_diag_+ASM1
oracle 2434 1 0 Apr30 ? 00:00:00 asm_mman_+ASM1
oracle 2436 1 0 Apr30 ? 00:00:00 asm_dbw0_+ASM1
oracle 2438 1 0 Apr30 ? 00:00:00 asm_lgwr_+ASM1
oracle 2440 1 0 Apr30 ? 00:00:00 asm_ckpt_+ASM1
oracle 2442 1 0 Apr30 ? 00:00:00 asm_smon_+ASM1
oracle 2444 1 0 Apr30 ? 00:16:03 asm_rbal_+ASM1

[1] For RAC configurations, the ASM SID is +ASMx, where x represents the instance number.
[2] The ASM instance only requires approximately a 100 MB memory footprint.
[3] The OPEN startup option performs the same function as MOUNT; i.e., it mounts all diskgroups listed in asm_diskgroups.

ASM installation

In cases where a single ASM instance is managing only one database instance, it may be sufficient to
maintain a single ORACLE_HOME for ASM and the database.
However, for systems that have an ASM instance managing the storage for several database instances and that require higher availability, it is recommended that the ASM instance be installed in a separate ORACLE_HOME (ASM_HOME) from the database ORACLE_HOME[4].

In Oracle Database 10g Release 2, Oracle Universal Installer (OUI) and Database Configuration Assistant
(DBCA) have been enhanced to allow the user to seamlessly create and install an ASM instance in a
separate ORACLE_HOME. OUI now has options to:

o Install and configure a database that uses ASM for storage management
o Install and configure an ASM instance, without creating a database.
o Install and configure ASM on a system that already has a running database, where subsequently,
the DBA can use the EM Migration Utility to migrate the database to ASM.

Additionally, in Oracle Database 10g Release 2, ASM transparently supports both older and newer
software versions of the database. See ASM – Database Multi-Version Support section for more details.
This new feature provides higher availability as well as the foundation for the Database Storage
Consolidation Feature. For more information on Database Storage Clustering see the following section:
Database consolidation and clustering

ASM SGA and parameter sizing


Enabling the ASM instance requires configuring only a handful of init.ora parameters. The parameter file
for ASM can be a Pfile or Spfile. DBCA will create an ASM Pfile by default during database/ASM
creation. The Spfile can be manually configured later if desired. However, if an Spfile is used in clustered ASM environments, then it must reside on a shared raw device. The init.ora parameters specified in Figure 1 are the essential parameters required to start up ASM[5].
The following describes the usage of SGA components.

o db_cache_size - This value determines the size of the buffer cache, which is used to cache metadata blocks. The default value will suit most implementations.

o shared_pool - Used for standard memory usage (control structures, etc.) to manage the instance, and also to store extent maps. The default value will suit most implementations.

o large_pool - Used for large allocations. The default value will suit most implementations.

The processes init.ora parameter for ASM may need to be modified. The following recommendation pertains to versions 10.1.0.3 and later of Oracle, and works for RAC and non-RAC systems. The formula below can be used to determine an optimal value for this parameter:

processes = 25 + 15n, where n is the number of databases using ASM for their storage.
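For example, an ASM instance serving four databases would be sized at 25 + 15*4 = 85 processes. A minimal sketch of applying such a value, assuming the ASM instance uses an Spfile (processes is a static parameter, so the instance must be restarted for it to take effect):

SQL> alter system set processes=85 scope=spfile;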

[4] If a separate ASM_HOME is created with a different user id than the database ORACLE_HOME, then the ASM_HOME owner needs to be in the dba group. Also, the read/write permissions for the ASM disks need to be set to match the ASM_HOME group id.
[5] ASM currently does not employ the Oracle Database 10g Automatic Memory Management feature. However, since ASM instances have static memory needs, this is not an issue.

ASM and privileges

Access to the ASM instance is comparable to that of a standard instance; i.e., via SYSDBA and SYSOPER. Note, however, that since there is no data dictionary, authentication is done at the operating system level and/or with an Oracle password file. Typically, the SYSDBA privilege is granted through the use of an operating system group. On Unix, this is typically the dba group. By default, members of the dba group have SYSDBA privileges on all instances on the node, including the ASM instance[6]. Users who connect to the ASM
instance with the SYSDBA privilege have complete administrative access to all disk groups in the system.
The SYSOPER privilege is supported in ASM instances and limits the set of allowable SQL commands to
the minimum required for basic operation of an already-configured system. The following commands are
available to SYSOPER users:
• STARTUP/SHUTDOWN
• ALTER DISKGROUP MOUNT/DISMOUNT
• ALTER DISKGROUP ONLINE/OFFLINE DISK
• ALTER DISKGROUP REBALANCE
• ALTER DISKGROUP CHECK
• Access to all V$ASM_* views

All other commands, such as CREATE DISKGROUP, ADD/DROP/RESIZE DISK, and so on, require the
SYSDBA privilege and are not allowed with the SYSOPER privilege.
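As a sketch, a SYSOPER session (operating system authentication assumed) can perform the allowed operations above, but nothing structural:

SQL> connect / as sysoper
SQL> alter diskgroup DATA dismount;
SQL> alter diskgroup DATA mount;
SQL> -- a CREATE DISKGROUP or ALTER DISKGROUP ... ADD DISK issued here
SQL> -- would fail, since those commands require the SYSDBA privilege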

ASMLIB

Oracle has developed a storage management interface called the ASMLIB API. ASMLIB is not required to
run ASM; it is simply an add-on module that simplifies the management and discovery of ASM disks. The
ASMLIB provides an alternative to the standard operating system interface for ASM to identify and access block devices. The ASMLIB API provides storage and operating system vendors the opportunity to supply extended storage-related features and exploit the capabilities and strengths of vendors' storage arrays.

The ASMLIB API provides two major feature enhancements over standard interfaces:
• Disk discovery – Providing more information about the storage attributes to the Database and
the DBA
• I/O processing – To enable more efficient I/O

Oracle provides an ASMLIB implementation, called the Oracle ASMLIB, for the Linux platform (the current version as of this writing is 2.0). This implementation of ASMLIB makes it easier to provision storage for a clustered pool of storage. Providing kernel-level driver access to disks means that permissions and device mappings are more stable across server reboots. ASMLIB also makes the discovery process simpler to manage and removes the challenge of disks being added to one node without being known to the other nodes in the cluster.

The Oracle ASMLIB for Linux is available and can be downloaded from OTN. See the following URL for more information on the ASMLib configuration and support matrix:
http://www.oracle.com/technology/tech/linux/asmlib/index.html

[6] On Windows systems, the sqlnet.ora must specify NTS.

Disks

The first task in building the ASM infrastructure is to discover and associate (add) disks under ASM management. This step is best done with the coordination of storage and systems administrators. The storage administrator will identify a set of disks from the storage array that will be presented to the host. The term disk is used loosely here: a disk can be a partition of a physical spindle, an entire spindle, or a RAID group set (defined in the storage array); this depends on how the storage array presents the logical unit number (LUN) to the Operating System (OS). In this document we refer generically to LUNs or disks presented to the OS simply as disks.

On Solaris systems, disks generally have the following SCSI name format: CwTxDySz, where C is the controller number, T is the target, D is the LUN/disk number, and S is the partition. Note that each OS has its own representation of SCSI disk naming.

ASM must use only character devices as disks, not block devices. On most Unix systems, character devices appear as /dev/rdsk entries (/dev/raw/raw* on Linux). The only exception to this rule is when ASM uses files on a NAS filesystem as disks.

For example, it is a best practice on a Solaris system to create a partition on the disk, such as slice 4 or 6, that skips the first 1 MB of the disk. Creating a partition serves several purposes. A partition creates a placeholder to identify that the disk is being used; an un-partitioned disk could be accidentally misused or overwritten. Skipping the first 1 MB of the disk avoids the OS label/VTOC and preserves alignment between ASM striping and the storage array's internal striping. Different Operating Systems have varying requirements for the OS label; i.e., some may require an OS label before the disk is used while others will not. The same principles apply for different Operating Systems, although the procedures may differ.

In SAN environments, it is assumed that the disks are identified and configured; i.e., they are appropriately zoned or LUN-masked within the SAN fabric and can be seen by the OS. Once the disks are identified, they will need to be discovered by ASM. This requires that the disk devices (Unix filenames) have their ownership changed from root to oracle[7].

Throughout this document a running example consisting of diskgroup DATA will be used to procedurally
outline the steps for creating a diskgroup and other ASM objects.

In our example, disks c3t19d5s4, c3t19d16s4, c3t19d17s4, and c3t19d18s4 are identified, and their ownership is set to the correct oracle:dba ownership.

Now, these disks can be defined in the init.ora parameter, asm_diskstring. In our example, we will use the
following wildcard setting:
*.asm_diskstring='/dev/rdsk/c3t19d*s4'

When ASM scans for disks, it will use that string and find any devices that it has permission to open. Upon successful discovery, the V$ASM_DISK view on the ASM instance will reflect which disks were discovered. Note that henceforth all views, unless otherwise stated, are examined from the ASM instance, not from the database instance.

In Figure 2 below, notice that the NAME column is empty and the group_number is set to 0. Disks that are discovered but not yet associated with a diskgroup have a null name and a group number of 0.

[7] Although logical volumes can be presented to ASM as disks, it is generally not a good practice, since an extra layer is added to the IO code path, introducing some level of overhead.

Figure 2.
SQL> select name, path, group_number from v$asm_disk;

NAME PATH GROUP_NUMBER


------------------------------ --------------- ------------
/dev/rdsk/c3t19d5s4 0
/dev/rdsk/c3t19d16s4 0
/dev/rdsk/c3t19d17s4 0
/dev/rdsk/c3t19d18s4 0

Disks have various header statuses that reflect their membership state with a diskgroup. Disks can have the following header statuses:

o Former - This state declares that the disk was formerly part of a diskgroup.
o Candidate - When a disk is in this state, it is available to be added to a diskgroup.
o Member - This state indicates that a disk is already part of a diskgroup.
o Provisioned - This state is similar to candidate, in that the disk is available to diskgroups. However, the provisioned state indicates that the disk has been configured or made available using ASMLIB.
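The header status can be inspected directly from V$ASM_DISK; a sketch against the running example (the output shown is illustrative):

SQL> select path, header_status from v$asm_disk;

PATH                   HEADER_STATUS
---------------------- -------------
/dev/rdsk/c3t19d5s4    MEMBER
/dev/rdsk/c3t19d16s4   MEMBER
/dev/rdsk/c3t19d20s4   CANDIDATE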

There may be cases in RAC environments where disk paths are not identical on all nodes. For example,
node1 will have /dev/rdsk/c3t1d4s4 that points to a disk, and node2 will present /dev/rdsk/c4t1d4s4 for the
same device. This is typical and should not be considered an issue. ASM does not require that the disks have the same names on every node. However, ASM does require that the same disks be visible to each ASM instance via that instance's discovery string. The instances can have different discovery strings if necessary.
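When a shared Spfile is used, instance-specific discovery strings can be set with the SID clause of ALTER SYSTEM; a sketch using the node1/node2 paths above (+ASM1 and +ASM2 are the assumed instance names):

SQL> alter system set asm_diskstring='/dev/rdsk/c3t1d*s4' sid='+ASM1';
SQL> alter system set asm_diskstring='/dev/rdsk/c4t1d*s4' sid='+ASM2';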

ASM and Multipathing

An I/O path generally consists of an initiator port, fabric port, target port, and LUN. Each permutation of
this I/O path is considered an independent path. Dynamic Multi-pathing/failover tools aggregate these
independent paths into a single logical path. This path abstraction provides I/O load balancing across the
host bus adapters (HBA), as well as non-disruptive failovers on I/O path failures. Multi-pathing (MP) software requires all of the required disks to be visible on each available and eligible HBA. An MP driver will detect multi-paths by performing a SCSI inquiry command[8].

Most multi-pathing drivers support multipath services for fibre channel attached SCSI-3 devices. These drivers receive naming and transport services from one or more physical HBA devices. To support multi-pathing, a physical HBA driver must comply with the multi-pathing services provided by the multipath driver. Multipathing tools provide the following benefits:
o Provide a single block device interface for a multi-pathed LUN
o Detect any component failures in the I/O path; e.g., fabric port, channel adapter, or HBA.
o When a loss of path occurs, ensure that I/Os are re-routed to the available paths, with no process
disruption.
o Reconfigure the multipaths automatically when events occur.
o Ensure that failed paths get revalidated as soon as possible and provide auto-failback capabilities.
o Configure the multi-paths to maximize performance using various load balancing methods; e.g.,
round robin, least I/Os queued, or least service time.

[8] Probes for the World Wide Device Identifiers (WWD ID).

Examples of multi-pathing software include EMC PowerPath, Veritas DMP, Sun Traffic Manager, Hitachi
HDLM, and IBM SDDPCM. Linux 2.6 has a kernel based multipathing driver called Device Mapper.
Additionally, some HBA vendors also provide multipathing solutions.
Oracle Corporation does not certify or qualify these multipathing tools. Although ASM does not provide multi-pathing capabilities, ASM does leverage multi-pathing tools, as long as the path or device produced by the multi-pathing tool returns a successful return code from an fstat system call. Metalink Note 294869.1 provides more details on ASM and multipathing.

DiskGroups

Once the disks are discovered, a diskgroup can be created that encapsulates one or more of these disks. A diskgroup, which is the highest-level data structure in ASM, is comparable to an LVM volume group. However, there are several differentiators between typical LVM volume groups and ASM diskgroups:

o An ASM filesystem layer is implicitly created within a diskgroup. This filesystem is transparent to users and only accessible through ASM, interfacing databases, and the 10.2 ASM command-line tool.

o There are inherent automatic file-level striping and mirroring capabilities. A database file created
within an ASM diskgroup will have its file extents (not to be confused with its database extents)
distributed equally across all online disks in the diskgroup, which provides an even IO load.

The creation of a diskgroup involves the validation of the disks to be added. These disks must have the
following attributes:

o Cannot already be in use by another diskgroup


o Must not have a pre-existing ASM header
o Cannot have an Oracle file header.

This check and validation prevents ASM from destroying any in-use data device. Disks with a valid header status, which includes candidate, former, or provisioned, are the only ones allowed to be diskgroup members[9].

The disk group is created using SQL commands. See Figures 4a, 4b and 5 below.
Figure 4a.
SQL> create diskgroup DATA external redundancy disk
'/dev/rdsk/c3t19d5s4',
'/dev/rdsk/c3t19d16s4',
'/dev/rdsk/c3t19d17s4',
'/dev/rdsk/c3t19d18s4';

The output below, from V$ASM_DISKGROUP, shows the newly created diskgroup.
Figure 4b.
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;

NAME STATE TYPE TOTAL_MB FREE_MB


------------------------------ ----------- ------ ---------- ----------
DATA MOUNTED EXTERN 34512 34101

[9] Disks with a status of member require use of the force option when being added to a diskgroup. Use the force option with extreme caution.

In the example above, a DATA diskgroup is created using four disks, which reside in a storage array, with
the redundancy being handled externally via the storage array.
Figure 5 shows how the V$ASM_DISK view reflects the disk state change after the disks are incorporated into the diskgroup.
Figure 5.
SQL> select name, path, mode_status, state, disk_number from v$asm_disk;

NAME PATH MODE_ST STATE DISK_NUMBER


------------ ---------------------- ------- -------- -----------
DATA_0000 /dev/rdsk/c3t19d5s4 ONLINE NORMAL 0
DATA_0001 /dev/rdsk/c3t19d16s4 ONLINE NORMAL 1
DATA_0002 /dev/rdsk/c3t19d17s4 ONLINE NORMAL 2
DATA_0003 /dev/rdsk/c3t19d18s4 ONLINE NORMAL 3

ASM will generate and assign a disk name with a sequence number to each disk added to the diskgroup[10]. The disk name is used when performing any disk management activities. The ASM disk name is different from the SCSI address. This allows for consistent naming across RAC nodes and also protects against SCSI address name changes due to array re-configurations. In 10.2, disk names are unique only within a diskgroup, whereas in 10.1 disk names were unique to an ASM instance (or clustered ASM instances in RAC).

After the diskgroup is successfully created, metadata information, which includes the creation date, diskgroup name, and redundancy type, is stored in the SGA and in the header of each disk within the DATA diskgroup. The V$ASM_DISK view will now reflect this disk header information. Once these disks are under ASM management, all subsequent mounts of the diskgroup will cause ASM to re-read and validate the ASM disk headers.

When mounting diskgroups, either at ASM startup or for subsequent mounts, it is advisable to mount all
required diskgroups at once. This is done to minimize the overhead of multiple ASM disk discovery scans.
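For example, several diskgroups can be mounted with a single statement, so only one discovery scan is performed (the FLASH diskgroup is hypothetical here):

SQL> alter diskgroup DATA, FLASH mount;

SQL> -- or simply mount every diskgroup ASM knows about
SQL> alter diskgroup all mount;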

When a diskgroup becomes mounted, ASM registers the diskgroup name, the instance name, and the
corresponding Oracle Home path name with Cluster Synchronization Services (CSS). This registered data
is then used by the database instance to build the TNS connect string. This string is subsequently used by
the database instance to connect into the ASM instance for volume management activities. This is
discussed in the Database Instance Section.

The ASM instance also houses I/O statistics in the V$ASM_DISK and V$ASM_DISK_STAT views. These views include reads/writes processed, read/write blocks handled, and read/write errors incurred. An ASM-based iostat-like utility, called asmiostat, can be used to show I/O statistics for ASM-based disks. Appendix F contains the code for this utility. This script can be copied into a file and executed against any 10.2 ASM instance.

In 10.1, querying V$ASM_DISK and V$ASM_DISKGROUP was an expensive operation because each
execution involved a disk discovery. To minimize overhead, and allow lightweight access to this dataset,
10.2 introduces two new views: V$ASM_DISK_STAT and V$ASM_DISKGROUP_STAT. These two
views are identical to V$ASM_DISK and V$ASM_DISKGROUP; however, the V$ASM_DISK_STAT and V$ASM_DISKGROUP_STAT views are polled from memory and therefore do not require disk discovery. Since these new views provide efficient lightweight access, EM can periodically query performance statistics at the disk level and aggregate space usage statistics at the diskgroup level without incurring significant overhead. Note, to get accurate real-time statistics, it may be prudent to query the V$ASM_DISK and V$ASM_DISKGROUP views, but caution should be exercised when running queries against these views during peak workloads.
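A sketch of a lightweight space check using the cached view (no disk discovery is triggered):

SQL> select name, total_mb, free_mb from v$asm_diskgroup_stat;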

[10] A generic disk name is generated only if a disk name is not provided in the create diskgroup statement.

Diskgroups and databases

A diskgroup can contain files from many different 10g Oracle databases, and these databases can be either on the same server or reside on different servers. However, a disk and a database file can only be part of one diskgroup. Alternatively, one Oracle database may also store its files in multiple diskgroups managed by the same ASM instance. Allowing multiple databases to share a diskgroup provides greater potential for improved disk utilization and greater overall throughput.

To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally no more than two diskgroups be maintained and managed per RAC cluster or single ASM instance[11]:

o Database area: This is where active database files such as datafiles, control files, online redo logs,
and change tracking files used in incremental backups are stored
o Flash recovery area: Where recovery-related files are created, such as multiplexed copies of the
current control file and online redo logs, archived redo logs, backup sets, and flashback log files

To provide higher availability for the database, when a Flash Recovery Area is chosen at database creation time, an active copy of the controlfile and one member of each redo log group are stored in the Flash Recovery Area. Note, additional copies of the controlfile or extra logfiles can be created and placed in either diskgroup as desired. Please refer to the following documentation for a detailed discussion of the usage and best practices of the Flash Recovery Area:

http://otn.oracle.com/deploy/availability/pdf/40104_Bednar_doc.pdf
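Once the two diskgroups exist, a minimal sketch of pointing a database at this layout uses the OMF parameters (the FLASH diskgroup name and the 100G size are assumptions, not from this paper):

SQL> alter system set db_create_file_dest='+DATA';
SQL> -- the recovery destination size must be set before the destination itself
SQL> alter system set db_recovery_file_dest_size=100G;
SQL> alter system set db_recovery_file_dest='+FLASH';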

ASM redundancy and Failure Groups

For systems that do not use external redundancy, ASM provides its own internal redundancy mechanism and additional high availability by way of failure groups. A failure group, which is a subset of a diskgroup, is by definition a collection of disks that can become unavailable due to a failure of one of its associated components; e.g., controllers or entire arrays. Thus, disks in two separate failure groups (for a given diskgroup) must not share a common failure component. If you define failure groups for your disk group, ASM can tolerate the simultaneous failure of multiple disks in a single failure group.

ASM uses a unique mirroring algorithm: it does not mirror disks; rather, it mirrors extents. As a result, to provide continued protection in the event of a failure, only spare capacity in the disk group is needed, rather than having to provision a hot-spare disk. It is not advisable to create failure groups of different sizes, as this will create problems when allocating secondary extents.
file to one disk in a disk group, it allocates a mirror copy of that extent to another disk in the disk group.
Primary extents on a given disk will have their respective mirror extents on one of several partner disks in
the disk group. ASM ensures that a primary extent and its mirror copy never reside in the same failure
group.

Redundancy for disk groups can be either Normal redundancy (the default), where files are two-way mirrored (requiring at least two failure groups), or High redundancy, which provides a higher degree of protection using three-way mirroring (requiring at least three failure groups). Once a disk group has been created, its redundancy level cannot be changed. To change the redundancy of a diskgroup, another diskgroup must be created with the appropriate redundancy, and then the datafiles must be moved (using RMAN restore or DBMS_FILE_TRANSFER) to this newly created diskgroup.
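As a hedged sketch of the DBMS_FILE_TRANSFER approach (the DATA_HIGH diskgroup, directory objects, and file names are illustrative; the affected tablespace must be offline or read-only during the copy, and Appendix B outlines the full migration procedure):

SQL> create directory src_dg as '+DATA/orcl/datafile';
SQL> create directory dst_dg as '+DATA_HIGH/orcl/datafile';
SQL> begin
       dbms_file_transfer.copy_file(
         source_directory_object      => 'SRC_DG',
         source_file_name             => 'users_01.dbf',
         destination_directory_object => 'DST_DG',
         destination_file_name        => 'users_01.dbf');
     end;
     /
SQL> -- then repoint the database to the copied file, e.g. via
SQL> -- alter database rename file '...' to '...';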

[11] Additional diskgroups may be added to support tiered storage classes in Information Lifecycle Management (ILM) or Hierarchical Storage Management (HSM) deployments.

Figure 6 shows an example of creating a diskgroup using failure groups. In this example, ASM normal redundancy is being deployed over a storage array that has four internal trays, with each tray having four disks. The failing component to isolate is the storage tray, thus the failure group boundary is the storage tray; i.e., each storage tray will be associated with a failure group.

Figure 6.

SQL> create diskgroup DATA_NRML normal redundancy
failure group flgrp1 disk
'/dev/rdsk/c3t19d3s4','/dev/rdsk/c3t19d4s4','/dev/rdsk/c3t19d5s4','/dev/rdsk/c3t19d6s4'
failure group flgrp2 disk
'/dev/rdsk/c4t20d3s4','/dev/rdsk/c4t20d4s4','/dev/rdsk/c4t20d5s4','/dev/rdsk/c4t20d6s4'
failure group flgrp3 disk
'/dev/rdsk/c5t21d3s4','/dev/rdsk/c5t21d4s4','/dev/rdsk/c5t21d5s4','/dev/rdsk/c5t21d6s4'
failure group flgrp4 disk
'/dev/rdsk/c6t22d3s4','/dev/rdsk/c6t22d4s4','/dev/rdsk/c6t22d5s4','/dev/rdsk/c6t22d6s4';

Diskgroups created with normal or high redundancy contain files that are double or triple mirrored, respectively[12]. The first file extent allocated is chosen as the primary extent, and the mirrored extent is called the secondary extent. In the case of high redundancy there will be two secondary extents. This logical grouping of primary and secondary extents is called an extent set. The extents in an extent set always contain the exact same data, since they are mirrored versions of each other. Thus, when a block is written to a file, each extent in the extent set is written in parallel. However, when a block is read from disk, it is always read from the primary extent, unless the primary extent cannot be read. Keep in mind that each disk in a diskgroup (and its failure groups) contains nearly the same number of primary and secondary extents. This provides an even distribution of read IO activity across all the disks. This is different from most logical volume managers, which have a primary and mirrored disk set.

As stated above, failure groups are used to isolate component failures, and thus the failing component needs to be understood. If the failing component cannot be determined, or the failing component is the disk itself (as opposed to the array controller or storage tray), then it may be advisable to not specify any failure groups when defining normal or high redundancy diskgroups. This results in every ASM disk being in its own failure group, with ASM internally determining the disk mirroring partnerships. If the storage array has fully redundant components with dual paths to the disk, it is best not to specify failure groups and to let ASM manage the disk mirroring partnerships.

Using the example in Figure 6, a disk failure in failure group flgrp1 will induce a rebalance, during which the contents (data extents) of the failed disk are reconstructed using the redundant copies of the extents from the partnered disk. This partnered disk can be either from failure group flgrp2 or flgrp3. Let's say that the partnered disk is c5t21d3s4 from failure group 3. During the rebalance, if the database instance needs to access an extent whose primary extent was on the failed disk, then the database will read the mirror copy from the appropriate disk in failure group flgrp3. Once the rebalance is complete and the disk contents are fully reconstructed, the database instance will return to reading primary copies only.

Since disk drives are mechanical devices, they have a tendency to fail. As drives begin to fail or incur sporadic I/O errors, the probability of database corruption increases. ASM takes proactive measures with regard to I/O errors, irrespective of the use of failure groups. A permanent I/O error is only signaled to the caller (Oracle IO process) after several retries in the device driver. If a permanent disk IO error is incurred during an Oracle write operation, then the affected disk is removed from the diskgroup by ASM, thus preventing more application failures[13]. If the loss of a disk results in data loss, ASM will automatically dismount the diskgroup to protect the integrity of the diskgroup data.

[12] Diskgroup metadata is always triple mirrored with normal or high redundancy.

New Space related columns in Oracle Database 10g Release 2

There are two new columns introduced in Oracle Database 10g Release 2 that provide more accurate
information on free space usage.

o USABLE_FREE_SPACE - In 10.1, the FREE_MB value reported in V$ASM_DISKGROUP does not take mirroring into account[14]. 10.2 introduces a new column in V$ASM_DISKGROUP called USABLE_FREE_SPACE to indicate the amount of free space that can be "safely" utilized taking mirroring into account. The column provides a more accurate view of usable space in the diskgroup.

o REQUIRED_MB_FREE - Along with USABLE_FREE_SPACE, a new column has been added to V$ASM_DISKGROUP to more accurately indicate the amount of space that is required to be available in a given diskgroup in order to restore redundancy after one or more disk failures. The amount of space displayed in this column takes mirroring into account.
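A sketch of querying this information; note that in the shipped 10.2 release these columns surface in V$ASM_DISKGROUP as REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB:

SQL> select name, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
     from v$asm_diskgroup;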

Cluster Synchronization Services - CSS

ASM was designed to work with single instance as well as with RAC 10g clusters. ASM, even in single-instance configurations, requires that Cluster Synchronization Services (CSS) be installed and started before ASM becomes available. In a single-instance configuration, CSS maintains synchronization between the ASM and database instances. CSS, which is a component of Oracle's Cluster Ready Services (CRS), is automatically installed on every node that runs Oracle Database 10g ASM and starts up automatically on server boot-up. In RAC 10g environments, the full Oracle Clusterware (CRS) is installed on every RAC node.

Since CSS provides cluster management and node monitoring, it inherently monitors ASM and its shared storage components (disks and diskgroups). Upon startup, ASM will register itself and all diskgroups it has mounted with CSS. This allows CSS across all RAC nodes to keep diskgroup metadata in sync. Any new diskgroups that are created are also dynamically registered and broadcast to other nodes in the cluster.

As with the database, internode communication is used to synchronize activities in ASM instances. CSS is
used to heartbeat the health of the ASM instances. ASM internode messages are initiated by structural
changes that require synchronization; e.g. adding a disk. Thus, ASM uses the same integrated lock
management infrastructure that is used by the database for efficient synchronization.

[13] Disks are removed from the diskgroup only on a write operation I/O error, not for read operations. For read errors, the block is read from the secondary extents (only for normal or high redundancy) or from the external redundancy mirror pair (this is performed by the storage array).
[14] Note that for external redundancy, the FREE_MB column is equal to USABLE_FREE_SPACE.

Database Instances

A database instance is the standard Oracle instance and is the client of the ASM instance. The database-to-ASM communication is always intra-node; i.e., the database will not contact the remote ASM instance (in the case of RAC) for servicing database requests.

After the ASM disk group is created, DBCA can now be used to create the database[15]. DBCA has three options for database file structures: filesystem, raw, or ASM. See the ASM installation section for an overview of the 10.2 new features of DBCA.

If ASM is selected, then all available diskgroups, if already created for that ASM instance, will be listed. Selecting ASM and a diskgroup will direct DBCA to create a database within ASM. If no diskgroups exist, or a new diskgroup is desired, then DBCA offers the opportunity to create a new diskgroup. Note, an ASM diskgroup can house all the Oracle physical files, ranging from controlfiles and datafiles to spfiles and RMAN backup files. However, Oracle executables, CRS files (OCR and voting disks) and non-database files cannot be housed in ASM diskgroups. Throughout this document all Oracle physical files are generically referred to as database files.

An active database instance that uses ASM storage can then operate just like a typical database instance; i.e., all file access is performed directly without extensive ASM intervention. Database instances interact with the ASM instance when files are created, deleted or opened. At that time the file layout is read from the ASM instance, and all subsequent I/Os are done using the extent map stored in the database instance. ASM and DB instances also interact if the storage configuration changes; e.g., when disks are added, dropped, or fail.

Although the ASM layer is transparent to the database clients and users on the server, all datafile access can
only be done via the database instance and its utilities. For example, database backups of ASM based files
can only be performed with RMAN. Note, utilities like the Unix dd command are not recommended for
backing up or restoring ASM diskgroups.

The database file-level access (read/write) of ASM files is similar to pre-10g, except that any database file name that begins with a "+" will automatically be handled and managed using the ASM code path. However, with ASM files, the database file access inherently has the characteristics of raw devices; i.e., un-buffered (direct IO) with kernelized asynchronous IO (KAIO).
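For example, creating a datafile in ASM requires only the diskgroup name; ASM generates and manages the underlying OMF file name (the tablespace name and size here are illustrative):

SQL> create tablespace app_data datafile '+DATA' size 10g;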

Database consolidation and clustering

In Oracle Database 10g Release 1, RAC and single-instance databases could not be managed by the same ASM instance. This created challenges in implementing storage Grid architectures and consolidated database solutions. Oracle Database 10g Release 2 enhances the ASM functionality in a clustered environment, allowing one ASM instance per node to manage all database instances in the cluster. Therefore, an ASM instance on a given node can now manage storage for both single-instance and RAC database instances. This feature relieves the customer from maintaining more than one ASM instance to serve the different database types that might exist in the cluster, thus obviating the need for DBAs to manage separate storage pools.

This new feature leverages Oracle Clusterware to economically consolidate multiple islands of databases
into a single clustered pool of storage managed by ASM. This essentially allows customers to optimize
their storage utilization by eliminating wasted over-provisioned storage and save money by reducing their
overall footprint of database storage.

[15] DBCA can also be used to create the ASM instance and disk groups.

Once the database is created and the instance is active, the database instance becomes a client of ASM. This is reflected in the V$ASM_CLIENT view[16]. V$ASM_CLIENT contains one row for every ASM diskgroup that is opened by a database instance[17]. In Figure 7 below, V$ASM_CLIENT displays two databases connected to ASM, with each instance using two diskgroups. Note, cubs is a 10.2 RAC-enabled database and brew is a 10.1.0.4 single instance.

Figure 7.

SQL> select instance, status, software_version, compatible_version
     from v$asm_client;

INSTANCE   STATUS      SOFTWARE_VRSN   COMPATIBLE_VRSN
---------- ----------- --------------- ---------------
cubs1      CONNECTED   10.2.0.1.0      10.2.0.1.0
cubs1      CONNECTED   10.2.0.1.0      10.2.0.1.0
brew       CONNECTED   10.1.0.3.0      10.1.0.2.0
brew       CONNECTED   10.1.0.3.0      10.1.0.2.0

ASM – Database Multi-Version Support

In Oracle Database 10g Release 2, ASM transparently supports both older and newer software versions of the database. Both forward and backward compatibility are maintained between Oracle Database 10g Release 1 and Release 2. Therefore, any combination of 10.1.u.v and 10.2.x.y for either ASM or database instances interoperates seamlessly without administrative intervention. The only requirement is that a database connecting to a 10.2 ASM instance must be at version 10.1.0.3 or later. Additionally, for RAC environments, CRS must be at the highest level.

Support for mixed versions of the ASM and database instances allows for relaxed migration strategies and higher availability. For example, a database upgrade will not force an upgrade of the ASM instance.

When mixing software versions, ASM functionality reverts to the functionality of the earliest version in
use. For example, a 10.1.0.3 database instance working with a 10.2.0.0 ASM instance does not exploit new
features in ASM 10.2.0.0. Conversely, a 10.2 database instance working with a 10.1.0.3 ASM instance will not exploit any of the new 10.2 features of ASM; e.g., XDB Folders. V$ASM_CLIENT has two new columns that provide information about the software version number and the corresponding database compatibility level. Note, V$ASM_CLIENT exists on both the ASM and database instances; therefore, the content of the view will be with respect to the instance from which it is queried. The SOFTWARE_VERSION column gives the version number of the database or ASM instance for the selected diskgroup connection. The COMPATIBLE_VERSION column gives the compatible setting of the database or ASM instance for the selected diskgroup connection. Figure 7 above shows a 10.1.0.4 database connected to a 10.2 ASM instance.

[16] On the database instance, this view contains rows if the database has open files in ASM (CONNECTED state) or has accessibility to any diskgroup (MOUNTED state).

Database processes to support ASM

In a database instance there are three sets of processes added to support the ASM diskgroups and
infrastructure.

o RBAL – This process performs global opens of all the disks in the diskgroups.
o ASMB – This process contacts CSS using the diskgroup name and acquires the associated ASM connect string. This connect string is then used to connect to the ASM instance[18]. Using this persistent connection, periodic messages are exchanged to update statistics and provide a heartbeat mechanism. During operations that require ASM intervention, such as a file creation by a database foreground, the database foreground connects directly to the ASM instance to perform the operation. Upon successful completion of file creation, the database file extent map is sent by ASM to ASMB. Additionally, ASMB also sends database IO statistics to the ASM instance.
o O00x – A group of slave processes that establish connections to the ASM instance, where x is a number from 1 to 10. Through this connection pool, database processes can send messages to the ASM instance. For example, opening a file sends the open request to the ASM instance via a slave. However, slaves are not used for long-running operations such as creating a file. The slave (pool) connections eliminate the overhead of logging into the ASM instance for short requests. These slaves are shut down when not in use.

Database Init.ora parameters to support ASM

The SGA parameters for the database instance need slight modification to support ASM extent maps and other ASM information. Note, if the 10g Automatic Memory Management feature is being used on the database instance, then the following sizing data can be treated as informational only, or as supplemental data in gauging appropriate values for the SGA. Oracle highly recommends using the Automatic Memory Management feature. The following are guidelines for SGA sizing on the database instance:

Processes: add 16

Large_pool: add an additional 600k

Shared_pool: additional memory is required to store extent maps. Aggregate the values from the following queries to obtain the current database storage size that is either already on ASM or will be stored in ASM. Then determine the redundancy type that is used (or will be used), and calculate the shared_pool value, using the aggregated value as input.

select sum(bytes)/(1024*1024*1024) from v$datafile;

select sum(bytes)/(1024*1024*1024) from v$logfile a, v$log b
where a.group#=b.group#;

select sum(bytes)/(1024*1024*1024) from v$tempfile where status='ONLINE';

For diskgroups using external redundancy: (every 100 GB of space needs 1 MB of extra shared pool) + 2 MB

For diskgroups using Normal redundancy: (every 50 GB of space needs 1 MB of extra shared pool) + 4 MB

For diskgroups using High redundancy: (every 33 GB of space needs 1 MB of extra shared pool) + 6 MB

[18] This connection is always a bequeath connection into the ASM instance.
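As a worked example of the rules above: if the three queries return a combined 300 GB and the files reside in a Normal redundancy diskgroup, the additional shared pool needed is 300/50 = 6 MB, plus 4 MB, i.e. roughly 10 MB on top of the instance's existing shared_pool requirement.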

ASM and database shutdown dependencies

Since ASM manages the diskgroups that hold the database files and their metadata, a shutdown of the ASM instance will cause all client database instances to shut down as well[19]. In normal shutdowns, the ASM instance will begin shutdown and wait for all sessions to disconnect, just as typical database instances do. Note, however, that ASM has persistent database instance connections, thus the database instances must be shut down first in order for ASM to complete its shutdown.

In the case of an ASM SHUTDOWN IMMEDIATE or ABORT, ASM will immediately terminate any open connections (including the database instance connections), and as a result, all dependent databases will immediately abort; i.e., the databases will shut down ungracefully.

In a single ASM instance configuration, if the ASM instance fails while diskgroups are open for update,
ASM instance recovery will be performed upon the restart of ASM by reading the disk group logs. In RAC
environments, with multiple ASM instances sharing disk groups, if one ASM instance should fail, another
node’s ASM instance automatically recovers transient ASM metadata changes caused by the failed
instance; i.e., it performs instance recovery. In RAC environments, the srvctl command can be used to create dependencies between ASM and the database(s), such that a graceful shutdown of ASM will shut down the dependent database instances. Additionally, this dependency also ensures that ASM starts up before the dependent databases are started.

$ srvctl modify instance –d crm -i crm1 –s asm1

ASM and database deployment Best Practices

ASM provides out-of-the-box enablement of redundancy and optimal performance. However, the
following items should be considered to increase performance and/or availability:

1. Implement multiple access paths to the storage array using two or more HBAs or initiators.

2. Deploy multi-pathing software over these multiple HBAs to provide IO load-balancing and
failover capabilities.

3. Use diskgroups with similarly sized and performing disks. A diskgroup containing a large number of disks provides a wide distribution of data extents, thus allowing greater concurrency for I/O and reducing the occurrence of hotspots. Since a large diskgroup can easily sustain various I/O characteristics and workloads, a single (database area) diskgroup can be used to house database files, logfiles, and controlfiles.

4. Use diskgroups with four or more disks, making sure these disks span several backend disk adapters.

5. As stated earlier, Oracle generally recommends no more than two diskgroups. For example, a common deployment can be four or more disks in a database diskgroup (a DATA diskgroup, for example) spanning all back-end disk adapters/directors, and 8-10 disks for the Flash Recovery Area diskgroup. The size of the Flash Recovery Area will depend on what is stored and how much; i.e., full database backups, incremental backups, flashback database logs and archive logs. Note, an active copy of the controlfile and one member of each redo log group are stored in the Flash Recovery Area.

[19] If an ASM instance fails, all Oracle database instances dependent on that ASM instance also fail. This is similar to a failure of a volume manager in the OS.

See the Oracle High Availability Architecture and Best Practices Manual for more details on these topics.

Storage Management and Allocation

A database created under the constructs of ASM will be striped by default and mirrored; as specified in the
SAME methodology; i.e., the I/O load is evenly distributed and balanced across all disks within the
diskgroup. In our example, the database will be striped across four disks, and redundancy will be handled
by the storage array. The striping is done on a file-by-file basis, using a 1MB stripe size, as opposed to
other LVMs that do striping and mirroring at a disk-volume level. ASM 1MB stripe depth has proved to be
the best stripe depth for Oracle databases. This optimal stripe depth coupled with even distribution of
extents in the diskgroup, prevents the occurrence of hot spots.

ASM allocates space in units called allocation units, or AUs. ASM always creates one-allocation-unit (AU) extents across all of the disks in a disk group[20]. For a diskgroup with similarly sized disks, there should be an equal number of AU extents on every disk. A database file is broken up into file extents. There are two types of file extent distributions: coarse and fine. With coarse distribution, each coarse-grain file extent is mapped to a single allocation unit. With fine-grain distribution, each grain is interleaved 128K across groups of 8 AUs. Fine distribution breaks up large I/O operations into multiple 128K I/O operations that can execute in parallel, which benefits sequential I/Os. Coarse and fine grain attributes are pre-defined, as part of system templates, for all system-related files; e.g., redo and archive log files are defined as fine grain, whereas datafiles are coarse.

Rebalance and Redistribution

With traditional volume managers, the expansion, growth, or shrinkage of striped filesystems has typically been difficult. With ASM, such disk/volume changes trigger a seamless redistribution (rebalancing) of the striped data, which can be performed online.

Any change in the storage configuration will trigger a rebalance. The main objective of the rebalance operation is to always provide an even distribution of file extents and space usage across all disks in the diskgroup. Rebalancing is performed on all database files on a per-file basis[21]; however, some files may not require a rebalance. The Oracle background process, RBAL, from the ASM instance manages this rebalance.

The rebalance process examines each file extent map, and the extents are re-plotted onto the new
storage configuration. For example, consider an 8-disk diskgroup, with a datafile of 40 extents (each
disk will house 5 extents). When 2 new drives of the same size are added, that datafile is rebalanced and
spread across 10 drives, with each drive containing 4 extents. Only 8 extents need to move to complete the
rebalance; i.e., a complete redistribution of extents is not necessary, as only the minimum number of extents
are moved to reach an equal distribution.

20
The AU size is currently not user configurable.
21
A weighting factor, influenced by disk size and file size, affects rebalancing. A larger drive will
consume more extents. This is done for even distribution based on overall size.

The following is a typical process flow for ASM rebalancing:

1. On the ASM instance, a DBA adds (or drops) a disk to (from) a diskgroup.
2. This invokes the RBAL process to create the rebalance plan and then begin coordination of the
redistribution
3. RBAL estimates the time and work required to perform the task and then messages the
ARBx processes to actually handle the request. The number of ARBx processes invoked is
directly determined by the asm_power_limit parameter.
4. The Continuing Operations Directory (metadata) will be updated to reflect a rebalance activity.
5. Each extent to be relocated is assigned to an ARBx process.
6. ARBx performs rebalance on these extents. Each extent is locked, relocated, and unlocked. This
is shown as Operation REBAL in V$ASM_OPERATION.

Rebalancing involves physical movement of file extents. Its impact is generally low because the
rebalance is done one AU at a time; therefore, there will only be one outstanding I/O at any given time per
ARBx process. This should not adversely affect online database activity. However, it is generally
advisable to schedule the rebalance operation during off-peak hours.

The init.ora parameter asm_power_limit is used to influence the throughput and speed of the rebalance
operation22. The range of values for asm_power_limit is 0 to 11, where a value of 11 is full throttle and a
value of 1 is low speed. A value of 0, which turns off automatic rebalance, should be used with caution.
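
For example, the instance-wide default can be changed dynamically on the ASM instance; the value
shown is only illustrative:

SQL> alter system set asm_power_limit=5;

The new default applies to subsequent rebalance operations; an individual operation can still override it
with the rebalance power clause of alter diskgroup, as described next.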

The power value can also be set for a specific rebalance activity using the alter diskgroup command. This
value is only effective for that specific rebalance task. See Figure 8. A power value of 0 indicates that no
rebalance should occur for this operation. This setting is particularly useful when adding or removing
storage (that has external redundancy), and then deferring the rebalance to a later scheduled time.

Figure 8.

"Session1 SQL"> alter diskgroup DATA add disk '/dev/rdsk/c3t19d39s4' rebalance power 11;

From another session

“Session2 SQL”> select * from v$asm_operation

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL WAIT    11      0     0        0        0           0
           1 DSCV  WAIT    11      0     0        0        0           0

(time passes…………..)

GROUP_NUMBER OPERA STAT POWER ACTUAL SOFAR EST_WORK EST_RATE EST_MINUTES
------------ ----- ---- ----- ------ ----- -------- -------- -----------
           1 REBAL REAP    11      2    25      219      485           0

22
The asm_power_limit is specific to each ASM instance.

If removing or adding several disks, it is best to add or remove the drives all at once; this
will reduce the number of rebalance operations needed for the storage changes.

An ASM diskgroup rebalance is an asynchronous operation, in that control is returned immediately to the
DBA after the operation is sent to the background, with the status of the ongoing operation query-able from
V$ASM_OPERATION. However, there are situations when the diskgroup operation needs to be synchronous;
i.e., wait until the rebalance is completed. In Oracle Database 10g Release 2, ASM alter diskgroup commands
that result in a rebalance now have the option of specifying the option to wait. This allows for accurate
[sequential] scripting that may rely on the space change from a rebalance completing before any subsequent
action is taken. For instance, if you add 100GB of storage to a completely full disk group, you won't be
able to use all 100GB of storage until the rebalance completes.

If a new rebalance command is entered while one is already in progress in wait mode, the prior command
will not return until the diskgroup is in a balanced state or the rebalance operation encounters an error.

An example script below illustrates how the WAIT option can be used in scripting. The script
below adds a new disk, /dev/raw/raw6, and waits until the add and rebalance operations complete,
returning control back to the script. The subsequent step adds a large tablespace.

#An example script to test the WAIT option

#From the ASM instance, add the disk and wait for the rebalance to complete
alter diskgroup data add disk '/dev/raw/raw6' rebalance power 2 wait;

#Login into the database and create a tablespace for the next month's Order Entry data
sqlplus oe_dba/oe1@proddb << EOF
create BIGFILE tablespace May_OE datafile size 800G;
EOF

With external redundancy, the dropping and adding of disks in the diskgroup is very seamless, and
becomes more of an exercise in scheduling the rebalancing. However, with failure groups, some planning
and forethought may be required with respect to how disks are removed and added. The best practices for
this exercise will be covered in a follow-up article.

Files and Aliases

Files are the objects that database instances access. Files come in the form of datafiles, controlfiles, spfiles,
redo logfiles, and several other file types23. The ASM filename syntax is different from typical
naming standards, yet resembles the filesystem approach of the 9i OMF feature. ASM file names are
derived and generated at the successful creation of a datafile. ASM file names are in the format of

“+diskgroup_name/database_name/database file type/tag_name.file_number.incarnation”

For example, the system tablespace name in the sample case is: “+DATA/orcl/datafile/system.258.3”.

Note that the tag name in the datafile name corresponds to the tablespace name. For redo logfiles, the tag
name is the group number; e.g., +DATA/orcl/onlinelog/group_3.264.3
The file number in the ASM instance can be used to correlate and determine filenames in the database instance.
The illustration below shows the relationship between a database instance’s file identity and ASM’s. Note,
the file number from V$ASM_FILE is embedded in the file name.

23
There are a total of 12 file types.

Figure 9a.

+ASM (ASM instance)


SQL> select file_number , sum(bytes)/(1024*1024) from v$asm_file group by
file_number

FILE_NUMBER SUM(BYTES)/(1024*1024)
----------- ----------------------
256 360.007813
257 35.0078125
258 450.007813
261 .002441406
262 150.007813
263 23.0078125
264 10.0004883
265 5.0078125
266 10.0004883
267 10.0004883
268 2.2109375

ORCL (database instance)


SQL> select name from v$datafile

NAME
----------------------------------------
+DATA/orcl/datafile/sysaux.256.3
+DATA/orcl/datafile/system.258.3
+DATA/orcl/datafile/undotbs1.257.3
+DATA/orcl/datafile/users.265.3
+DATA/orcl/datafile/nitin.263.3
/u01/oradata/orcl/ishan01.dbf

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/orcl/onlinelog/group_3.264.3
+DATA/orcl/onlinelog/group_2.266.3
+DATA/orcl/onlinelog/group_1.267.3

Note: this database contains ASM files and a non-ASM file named ISHAN01.dbf. The ASM list from
Figure 9a does not have an entry for the ISHAN datafile since it is not ASM managed. It is important to
note that an Oracle Database 10g database can have files that reside on filesystems, raw devices, and ASM
simultaneously.
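
A simple way to spot any remaining non-ASM files in such a hybrid configuration is to filter on the
leading '+' of ASM names; a sketch, run from the database instance:

SQL> select name from v$datafile where name not like '+%';

Any rows returned are filesystem or raw datafiles, and are candidates for migration (see Appendix B).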

The filename notation described thus far is called the Fully Qualified FileName (FQFN) notation. FQFNs
are generally long and awkward; therefore, to make the file-naming convention easier to remember, the
ASM Alias name format was introduced.
ASM Aliases are essentially in hierarchical directory format, similar to the filesystem hierarchy:
/u01/oradata/dbname/datafile_name
Alias names specify a disk group name, but instead of a file and incarnation number, a user-friendly string
name is used. Alias ASM filenames are distinguished from fully qualified or numeric names because they
do not end in a dotted pair of numbers. The following is an example of an ASM Alias24:

24
An alias can be assigned during file creation, or can be created for an existing file using the ALTER
DISKGROUP <diskgroup> ADD ALIAS <alias name> for < file name> command.

Figure 9b

#Create ASM directory

SQL> alter diskgroup DATA add directory '+DATA/oradata/orcl';

#Now create the alias

SQL> alter diskgroup DATA add alias '+DATA/oradata/orcl/nitin01.dbf' for
     '+DATA/orcl/datafile/nitin.263.3';

For best practices, every database should implement the Oracle Managed File (OMF) feature. OMF can be
enabled by setting the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters. OMF
allows simpler file naming and also provides better file handling. For example, if a tablespace is
dropped, ASM will delete the file automatically only if it is an OMF datafile. If OMF is not used, then the
DBA must drop the file manually by connecting to the ASM instance. OMF files are located in the
DB_CREATE_FILE_DEST directory25. Please review the OMF documentation for usage and implementation.
Note, OMF is not used whenever file names are explicitly provided in create/alter tablespace add datafile commands.
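
A minimal sketch of enabling OMF against the diskgroups used in this paper follows; the tablespace
name is hypothetical, and the recovery area size is only an example:

SQL> alter system set db_create_file_dest = '+DATA';
SQL> alter system set db_recovery_file_dest_size = 100G;
SQL> alter system set db_recovery_file_dest = '+FLASH';
SQL> create tablespace oe_data;

With OMF enabled, the create tablespace command above needs no datafile clause; ASM and OMF
generate and track the underlying file name automatically.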

Aliases are particularly useful when dealing with controlfiles and spfiles; e.g., an Alias ASM filename is
normally used in the CONTROL_FILES and SPFILE initialization parameters. In Figure 10 below, the
SPFILE and CONTROL_FILES parameters are set to aliases, and the DB_CREATE_FILE_DEST and
DB_RECOVERY_FILE_DEST parameters are set to the appropriate OMF destinations.

spfile = +DATA/orcl/spfileorcl.ora

control_files = +DATA/orcl/controlfile/control_01.ctl

db_create_file_dest = +DATA

db_recovery_file_dest = +FLASH

To show the hierarchical tree of files stored in the diskgroup, use the following connect by clause SQL to
generate the full path. Note that you need to have a sufficiently large shared_pool_size in your ASM
instance to execute this query. A more efficient way to browse the hierarchy is to use EM.

SELECT concat('+'||gname, sys_connect_by_path(aname,'/')) full_alias_path FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
START WITH (mod(pindex, power(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex;

25
There are other *_DEST variables that can be used for other file types.

Enhanced management for ASM
The following new features provide alternative means to access, manage and manipulate ASM files.

ASM Command Line Interface (ASMCMD)

In Oracle Database 10g Release 1, SQLPlus and EM were the only means to manage and
access ASM. In 10.2, a new ASM Command Line Interface (ASMCMD) utility has been introduced to
provide an easy alternative for accessing the files and directories within ASM diskgroups. The
ASMCMD utility provides short commands that have the same functionality as their SQL equivalents; i.e.,
this obviates the need to build complex SQL statements to query ASM information. Additionally, the
ASMCMD syntax is very similar to Unix shell commands, thus simplifying ASM management for those
administrators not accustomed to the SQLPlus interface or SQL coding. ASMCMD will work against 10.1
ASM instances as well.
Note, ASMCMD does not encompass all the functionality of the SQLPlus or EM interfaces; thus Oracle still
highly recommends that the Enterprise Manager interface (using either Grid or Database Control) be used
as the primary means to manage ASM, using ASMCMD for quick, short commands or for batch
command processing.

The ASMCMD interface provides both interactive and non-interactive modes. The interactive mode
provides a shell-like environment where the user is prompted to issue the commands listed below. The
non-interactive mode executes a single command and exits the utility. The latter is made available for
scripting and batch processing purposes. ASMCMD requires that the ASM instance be started and the
diskgroups to be managed be mounted; i.e., ASMCMD cannot mount diskgroups.

Interactive command line


Invoking ASMCMD with no command arguments starts an interactive shell environment.
Interactive Mode command line examples:

% asmcmd
ASMCMD> cd +DATA/RAC/DATAFILE
ASMCMD> ls -l
Type Redund Striped Time Sys Name
DATAFILE MIRROR COARSE MAR 28 02:03 Y EXAMPLE.276.553572261
DATAFILE MIRROR COARSE MAR 28 02:03 Y SYSAUX.275.553572195
DATAFILE MIRROR COARSE MAR 30 09:03 Y SYSTEM.256.553536323
DATAFILE MIRROR COARSE MAR 30 09:03 Y UNDOTBS1.258.553536325
DATAFILE MIRROR COARSE MAR 30 09:03 Y UNDOTBS2.268.553536665
DATAFILE MIRROR COARSE MAR 30 09:03 Y USERS.259.553536325

When the “-p” option is used to launch ASMCMD, the current path is shown in the
prompt. For example:
% asmcmd -p
ASMCMD [+]> cd +DATA/RAC/DATAFILE
ASMCMD [+DATA/RAC/DATAFILE]> ls
EXAMPLE.276.553572261
SYSAUX.275.553572195
SYSTEM.256.553536323
UNDOTBS1.258.553536325
UNDOTBS2.268.553536665
USERS.259.553536325

The du command operates similarly to the Unix du -s command; it shows used disk space.

% asmcmd -p
ASMCMD [+] > cd +DATA/RAC/ARCHIVELOG
ASMCMD [+DATA/RAC/ARCHIVELOG]> du
Used_Mb Mirror_used_Mb
2504 2504

List of commands:
help [command]                               Help listing
pwd                                          Show current directory
cd <dir>                                     Change directory
find [-t <type>] <dir> <alias>               File search
ls [-lsdrtLaH] [alias]                       Directory listing
mkdir <dir1 dir2 ...>                        Directory creation
rm <file1 file2 ...>                         File and directory deletion
mkalias <system_alias> <user_alias>          User alias creation
rmalias [-r] <user_alias1 user_alias2 ...>   User alias deletion
du [-H] [dir]                                Disk usage
lsdg [-H] [group]                            Diskgroup info
lsct [-H] [group]                            Client info
exit                                         Exit

Non-interactive Mode command line:


% asmcmd ls -L +DATA/RAC/DATAFILE
EXAMPLE.267.553536489
SYSAUX.257.553536323
SYSTEM.256.553536323
UNDOTBS1.258.553536325
UNDOTBS2.268.553536665
USERS.259.553536325

% asmcmd find -t DATAFILE +DG1 %
+DG1/ORA102/DATAFILE/TEST.256.560274299

You must use '%' as the wildcard for non-interactive mode queries.

DBMS_FILE_TRANSFER Utility Enhancements

DBMS_FILE_TRANSFER is a stored package that was introduced in 9i. This package provides a
means to copy files between two locations (on the same host or between database servers). In 10.1, ASM
leverages this utility to copy files between ASM diskgroups, and it is the primary utility used to instantiate
an ASM DataGuard database. In 10.2, DBMS_FILE_TRANSFER has been enhanced to support all
combinations of ASM and non-ASM file transfers as follows:
• ASM to ASM
• ASM to OS file
• OS file to ASM
• OS file to OS file

These changes now provide DBAs with another method to migrate database files into and out of ASM
storage.
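
As a hedged sketch of the 10.2 functionality, copying a datafile out of ASM to a filesystem might look
like the following; the directory objects and file names are illustrative only, and for a consistent copy the
source tablespace should first be taken offline or made read-only:

SQL> create directory asm_src as '+DATA/orcl/datafile';
SQL> create directory os_dest as '/u01/oradata/orcl';

SQL> begin
       dbms_file_transfer.copy_file(
         source_directory_object      => 'ASM_SRC',
         source_file_name             => 'nitin.263.3',
         destination_directory_object => 'OS_DEST',
         destination_file_name        => 'nitin01.dbf');
     end;
     /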

XML ASM Virtual Folder

Oracle Database 10g Release 2 ASM leverages the virtual folder feature in XML DB, providing a
mechanism to access and manipulate ASM files and folders via XML DB protocols such as FTP and
HTTP/DAV and programmatic APIs26. The ASM virtual folder is mounted as /sys/asm within the XML
DB hierarchy. The folder is virtual in the sense that the ASM folders and files are not actually stored
within XML DB; however, any operation on the ASM virtual folder is transparently handled by the
underlying ASM component.
The ASM virtual folder is created by default during the installation of XML DB. If the database is not
configured to use ASM, then this folder will be empty and no operations will be permitted on it. The
ASM folder contains one sub-folder for every mounted disk group. Each diskgroup folder will contain one
sub-folder corresponding to every database name. In addition it may contain additional files and folders
corresponding to the aliases that have been created by the administrators.

ASM Virtual Folder access via FTP

File Transfer Protocol (FTP) is also available for copying datafiles in and out of the ASM diskgroup,
offering additional flexibility for managing your files. Determine the appropriate FTP port defined for your
environment. The example below describes how to extract, copy, and delete Datapump dumpsets from an
ASM directory, +FLASH/dumpsets, which equates to the following XML DB folder: /sys/asm/flash/dumpsets.

[oracle@edrsr14p1 oracle]$ ftp edrsr14p1 2100


Connected to edrsr14p1 (139.185.35.114).
220- edrsr14p1
220 edrsr14p1 FTP Server (Oracle XML DB/Oracle Database) ready.
Name (edrsr14p1:oracle): system
331 pass required for SYSTEM
Password:
230 SYSTEM logged in
Remote system type is Unix.
ftp> cd /sys/asm/flash/orcl/dumpsets
250 CWD Command successful
ftp> get expdp_5_5.dat
local: expdp_5_5.dat remote: expdp_5_5.dat
227 Entering Passive Mode (139,185,35,114,100,234)
150 ASCII Data Connection
278530 bytes received in 0.153 secs (1.8e+03 Kbytes/sec)
ftp> put expdp_03_26.dat
ftp> del expdp_03_20.dat
ftp> rename expdp_03_26.dat latest_dumpset.dmp

Templates

ASM file templates are named collections of attributes applied to files during file creation. Templates
simplify file creation by housing complex file attribute specifications. When an ASM template is applied
to a file, that file will have all the attributes of that template. When a disk group is created, ASM
establishes a set of initial system default templates associated with that disk group. These templates contain
the default attributes for the various Oracle database file types. The administrator may change attributes of
the default templates. Additionally, administrators can add their own unique templates, as required. This
template specification enables simpler and more accurate file creation. System default templates cannot be
deleted. If you need to change an ASM file attribute after the file has been created, the file must be copied
into a new file with the new attributes; this is the only method of changing a file's attributes.

26
To determine the XML DB HTTP and FTP ports, go to the EM Administration page, then under the
XML Database section go to the XML Configuration link.

SQL> alter diskgroup DATA add template all_new_files attributes (unprotected fine);

Once the template is created, it can be applied to a new tablespace.

SQL> create tablespace ISHAN datafile '+DATA/ishan(all_new_files)' size 100M;
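
The attributes in effect for a diskgroup's templates, including user-added ones such as all_new_files
above, can be reviewed from the ASM instance; a sketch, assuming the diskgroup is group number 1:

SQL> select name, redundancy, stripe from v$asm_template where group_number = 1;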

Simplified ASM migration using EM

The EM Migration utility is a new feature in Oracle Database 10g Release 2 that provides simplified
database migration to ASM based storage. This utility migrates 10.2 and later databases residing on file
system or raw devices to Automatic Storage Management (ASM) on the same host. The EM Migration
utility can be accessed from the Maintenance tab of the Database Instance page of EM.

The purpose of the EM Migration utility is to automate and simplify the database migration process to ASM.
The EM Migration utility provides the following benefits:

• Supports migrating all database files, recovery related files, and spfile.
• Provides a wizard walk-through to create migration requests, process and schedule migration requests,
as well as report statuses of migration requests.

• Migrates the database using the same DBNAME, DBID and SID

• Provides a fully recoverable fallback database if the user decides to go back to non-ASM based storage
(assuming that the ASM storage and the existing storage exist at the same time)

• Monitors the migration process


• Supports online and offline database migration

The EM Migration utility essentially uses Recovery Manager (RMAN) commands to migrate control files,
datafiles, and recovery related files from filesystem to ASM.27 The migration task is created and submitted
as a job via the EM job system. The task specification process includes walking through the wizard, which
involves real-time validation of user requests. The submitted job includes the following major sub-steps
(a conceptual sketch of the RMAN phase follows the list):
o Construct init parameters
o Migrate database files with RMAN
o Migrate temp files and online logs with PL/SQL procedures
o Cleanup.
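
Conceptually, the RMAN phase of the submitted job resembles the following sketch; these are not the
utility's literal commands, and they assume db_create_file_dest already points at the target diskgroup:

RMAN> backup as copy database format '+DATA';
RMAN> switch database to copy;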

The following URL also describes how to perform a migration to ASM:
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gASMMigration.pdf

27
For files such as online logs and temp files, the utility will migrate them by dropping and recreating the
files, using PL/SQL procedures, and then recreate them in ASM.

Conclusion

ASM provides a point solution focused on database storage management through a unified interface. This
complete solution facilitates Grid Computing, lowers overall manageability costs, and helps maintain
serviceability levels. The following is a review of the high points of ASM:

Reduce administration complexity:

• Simplifies or eliminates daily database administration tasks


• Automatic I/O tuning for all types of workloads
• Reduces the number of objects to manage, because vertical integration of file system and
volume manager into the Oracle kernel reduces complexity, as one disk group replaces
multiple file systems. Also, with OMF, you do not have to worry about managing files,
just tablespaces.
• Simplifies database storage configuration changes: Automatic data copy on disk, add and
drop. Online migration to new storage hardware.
• Reduced downtime: ASM prevents accidental file deletion because there is no file system
interface, and ASM is responsible for file management.

Reduces Costs of Storage Management and Increases Utilization

• Reduces the cost of managing storage.


• Provides a mechanism for Database Storage Consolidation
• Provides clustered volume manager and file system functionality integrated with the
database.
• Makes block devices as simple to administer as file servers.
• Works with any type of disk from modular storage, NAS devices to SAN disk arrays.

Improves Performance, Scalability, and Reliability

• Raw disk I/O performance for all files


• Stripe files across multiple storage arrays
• Higher storage resource utilization
• The ASMLIB API improves CPU utilization in reaping I/O and provides priority and caching hints
• Overcomes file system size limitations
• Implements mirroring to protect against storage array failure.

For more information on ASM, please visit the ASM website on OTN:

http://www.oracle.com/technology/products/database/asm/index.html

Appendix A. Views and Discover Strings

The following table illustrates the various ASM-related V$ views and their relationship with the instance
type.

ASM related V$ Views   ASM Instance                        Database Instance
V$ASM_DISKGROUP        Row per diskgroup discovered        Row per diskgroup available to the
                                                           local database instance
V$ASM_CLIENT           Row per client connected            Row for the local ASM instance if the
                                                           DB contains open ASM files
V$ASM_DISK             Row per disk discovered, across     Row per disk, across all diskgroups,
                       all diskgroups as well as disks     used by the local database instance
                       that are not in disk groups
V$ASM_FILE             Row per file allocated, across      N/A
                       all client instances-diskgroups
V$ASM_TEMPLATE         Row per template                    Row per template with the
                                                           associated/attached diskgroup
V$ASM_ALIAS            Row per file alias                  N/A
V$ASM_OPERATION        Row per active ASM operation        N/A
                       (could be more than one
                       operation per invocation)

Appendix B. Migrating individual non-ASM datafiles to ASM

This section will describe how to migrate individual tablespaces to ASM, whilst the database is online.
This illustration assumes that there is a tablespace named ISHAN located on a mounted filesystem.
Additionally, this same procedure can also be used to move a datafile from one diskgroup to another
diskgroup.

1. Connect to RMAN
RMAN> connect target

2. Make the target tablespace offline or read-only


RMAN> sql "alter tablespace ISHAN offline";

3. Copy the ISHAN tablespace to the DATA diskgroup


RMAN> backup as copy tablespace ishan format '+DATA';

4. On successful copy of the tablespace, switch all the datafiles in the ISHAN tablespace to ASM.
Determine the file number of the files to be switched (use v$datafile).

RMAN> switch datafile 6 to copy;

When the following message is received, the tablespace is deemed to have been migrated successfully:

datafile 6 switched to datafile copy "+DATA/orcl/datafile/ishan.314.1"

Note, the original filesystem copy still exists, and can be deleted.
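
Finally, since step 2 took the tablespace offline, bring it back online once the switch completes:

RMAN> sql "alter tablespace ISHAN online";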

Appendix C. Datapump filesets and ASM

This section describes how to create DataPump export dumpsets within ASM diskgroups.

1. Create a directory from ASM. This step is only necessary if you plan on storing the dumpset in a
specific directory.

alter diskgroup flash add directory '+flash/dumpsets';

2. Create a directory in the database

create directory dumpsets as '+flash/dumpsets';

3. Create a logfile directory, since logfiles can't be stored in ASM

create directory logfile_dest as '/u01/admin/dumpsets';

4. expdp parfile=expdp_parfile
where expdp_parfile contents are:
userid=system/manager1 directory='DUMPSETS' dumpfile=expdp_5_5.dat
job_name=full_export logfile=logfile_dest:exp5_5.log

5. Validate that the file is created within ASM.

SQL> select file_number, bytes, creation_date from v$asm_file
     where type = 'DUMPSET';

FILE_NUMBER BYTES CREATION_

----------- ---------- ---------

581 2785280 07-MAY-04

Appendix D. Creating extra controlfiles in ASM

This following procedure can be used to create additional controlfiles in the ASM diskgroup.

1. Create a new directory in your disk group using commands similar to the following ones (this is
optional).

alter diskgroup <DG_name> add directory '+<DG_name>/<DB_name>';

alter diskgroup <DG_name> add directory '+<DG_name>/<DB_name>/controlfile';

2. Then, edit the controlfile entry in your SPFILE/init.ora to point to the new controlfiles. For
example:

*.control_files='+DATA/Ora10g/controlfile/current.260.3','+FLASH/Ora10g/controlfile/o1_mf_0fh3t83f_.ctl','+DATA2/ora10g/controlfile/control03.ctl'

3. Shutdown cleanly your database (all instances if using RAC)

shutdown immediate;

4. Startup the database in nomount mode

startup nomount;

5. Using RMAN, restore the controlfile from one of the existing locations to the new one.
restore controlfile to '+DATA2/ora10g/controlfile/control03.ctl' from
'+DATA/Ora10g/controlfile/current.260.3';

6. Mount the database and recover the database if necessary.

SQL> alter database mount;

7. Open the database.

SQL> alter database open;

Appendix E. ASM SQL scripts

The following scripts are outlined to assist the user in managing ASM files; i.e., listing or deleting them. Note,
Oracle recommends using 10g EM Grid Control or Database Control for performing these functions.

1. This SQL statement generates a script to list the ASM files for a given database. The list is ordered
by listing first the directory structure and then the files. The script assumes that the files were
created using the ASM file name conventions; specifically, it assumes that the given database name is
present in the alias name (full_path column). The query will prompt for the database name.

The full_path variable in the query refers to the alias name. The dir column indicates whether it is a
directory, and the sys column indicates whether the alias was created by the system.

COL full_path FORMAT a120


COL dir FORMAT a3
COL sys FORMAT a3
SET ECHO OFF
SET VERIFY OFF
SET FEEDBACK OFF
SET PAGESIZE 39
SET LINESIZE 130

--get database name


ACCEPT database PROMPT 'Enter the database name: '

--spool file
spool lis_&database..lst

--generates list
SELECT full_path, dir, sys FROM
(SELECT CONCAT('+'||gname, SYS_CONNECT_BY_PATH(aname,'/')) full_path, dir,
sys FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex, a.ALIAS_DIRECTORY dir, a.SYSTEM_CREATED sys
FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
START WITH (MOD(pindex, POWER(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex
ORDER BY dir desc, full_path asc)
WHERE full_path LIKE UPPER('%/&database%');

--end spool
spool off
-- resetting format
COL full_path CLEAR
COL dir CLEAR
COL sys CLEAR
SET VERIFY ON
SET HEADING ON
SET FEEDBACK ON

2. This SQL statement will generate a script that will delete files in ASM for a given database. The
script assumes that the files were created using the ASM naming conventions. Specifically, it
assumes that the given database name is present in the alias name (full_path column), in the form
of "/<db_name>/". The script will prompt for the database name.

COL gsql FORMAT a300


SET ECHO OFF
SET VERIFY OFF
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 600
SET TRIMSPOOL ON

--get database name


ACCEPT database PROMPT 'Enter the database name: '

--spool file
spool del_&database..sql

--generates file list


SELECT 'ALTER DISKGROUP '||gname||' DROP FILE '''||full_path||''';' gsql
FROM
(SELECT CONCAT('+'||gname, SYS_CONNECT_BY_PATH(aname,'/')) full_path, gname
FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex, a.ALIAS_DIRECTORY adir
FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
WHERE adir='N'
START WITH (MOD(pindex, POWER(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex)
WHERE full_path LIKE UPPER('%/&database/%');

3. This SQL statement will generate a script to drop ASM directories.

SELECT 'ALTER DISKGROUP '||gname||' DROP DIRECTORY '''||full_path||''';'


gsql FROM
(SELECT CONCAT('+'||gname, SYS_CONNECT_BY_PATH(aname,'/')) full_path, gname
FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex, a.ALIAS_DIRECTORY adir
FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
WHERE adir='Y'
START WITH (MOD(pindex, POWER(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex
ORDER BY full_path desc)
WHERE full_path LIKE UPPER('%/&&database%');

--end spool
spool off
-- resetting format
COL gsql CLEAR
SET VERIFY ON
SET HEADING ON
SET FEEDBACK ON
SET PAGESIZE 30
SET LINESIZE 600
SET TRIMSPOOL OFF

Appendix F. ASM IOSTAT utility – ASMIOSTAT

#!/bin/ksh
#
# NAME
# asmiostat.sh
#
# DESCRIPTION
# iostat-like output for ASM
# $ asmiostat.sh [-s ASM ORACLE_SID] [-h ASM ORACLE_HOME] [-g Diskgroup]
# [-f disk path filter] [<interval>] [<count>]
#
# NOTES
# Creates persistent SQL*Plus connection to the +ASM instance implemented
# as a ksh co-process
#
#
#ORACLE_SID=+ASM
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS'

endOfOutput="_EOP$$"

typeset -u diskgroup
typeset diskgroup_string="Disk Group: All diskgroups"
typeset usage="
$0 [-s ASM ORACLE_SID] [-h ASM ORACLE_HOME] [-g diskgroup] [<interval>] [<count>]

Output:
DiskPath - Path to ASM disk
DiskName - ASM disk name
Gr - ASM disk group number
Dsk - ASM disk number
Reads - Reads
Writes - Writes
AvRdTm - Average read time (in msec)
AvWrTm - Average write time (in msec)
KBRd - Kilobytes read
KBWr - Kilobytes written
AvRdSz - Average read size (in bytes)
AvWrSz - Average write size (in bytes)
RdEr - Read errors
WrEr - Write errors
"

while getopts ":s:h:g:f" option; do


case $option in
s) ORACLE_SID="$OPTARG" ;;
h) ORACLE_HOME="$OPTARG"
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH ;;
g) diskgroup="$OPTARG"
diskgroup_string="Disk Group: $diskgroup" ;;
f) print '-f option not implemented';;
:) print "Option $OPTARG needs a value"
print "$usage"
exit 1 ;;
\?) print "Invalid option $OPTARG"
print "$usage"
exit 1 ;;
esac
done
shift OPTIND-1

typeset -i interval=${1:-10} count=${2:-0} index=0

#
# Verify interval and count arguments are valid
#
(( interval <=0 || count<0 )) && {
print 'Invalid parameter: <interval> must be > 0; <count> must be >= 0'
print "$usage"
exit 1
}

#
# Query to run against v$asm_disk_stat
#
if [[ -z $diskgroup ]]; then
query="select group_number, disk_number, name, path, reads, writes, read_errs, write_errs, read_time, write_time,
bytes_read, bytes_written from v\$asm_disk_stat where group_number>0 order by group_number, disk_number;"
else
query="select group_number, disk_number, name, path, reads, writes, read_errs, write_errs, read_time, write_time,
bytes_read, bytes_written from v\$asm_disk_stat where group_number=(select group_number from v\$asm_diskgroup_stat
where name=regexp_replace('$diskgroup','^\+','')) order by group_number, disk_number;"
fi

#
# Check for version 10.2 or later
#
typeset version minversion=10.2
version=$($ORACLE_HOME/bin/exp </dev/null 2>&1 | grep "Export: " | sed -e 's/^Export: Release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/')
if [[ $(print "$version<$minversion" | bc 2>/dev/null) == 1 ]]; then
print "$0 requires Oracle Database Release $minversion or later"
exit 1
fi

#############################################################################
#
# Fatal error
#----------------------------------------------------------------------------
function fatalError {
print -u2 -- "Error: $1"
exit 1
}

#############################################################################
#
# Drain all of the sqlplus output - stop when we see our well known string
#----------------------------------------------------------------------------
function drainOutput {
typeset dispose=${1:-'dispose'} output
while :; do
read -p output || fatalError 'Read from co-process failed [$0]'
if [[ $QUERYDEBUG == ON ]]; then print $output; fi
if [[ $output == $endOfOutput* ]]; then break; fi
[[ $dispose != 'dispose' ]] && print -- $output
done
}

#############################################################################
#
# Ensure the instance is running and it is of type ASM
#----------------------------------------------------------------------------
function verifyASMinstance {
typeset asmcmdPath=$ORACLE_HOME/bin/asmcmd
[[ ! -x $asmcmdPath ]] && fatalError "Invalid ORACLE_HOME $ORACLE_HOME: $asmcmdPath does not exist"
$asmcmdPath pwd 2>/dev/null | grep -q '^\+$' || fatalError "$ORACLE_SID is not an ASM instance"
}

#############################################################################

#
# Start the sqlplus coprocess
#----------------------------------------------------------------------------
function startSqlplus {
# start sqlplus as a co-process and set up the session
$ORACLE_HOME/bin/sqlplus -s '/ as sysdba' |&

print -p 'whenever sqlerror exit failure' \
&& print -p "set pagesize 9999 linesize 9999 feedback off heading off" \
&& print -p "prompt $endOfOutput" \
|| fatalError 'Write to co-process failed (startSqlplus)'
drainOutput dispose
}

#############################################################################
#############################################################################
# MAIN
#----------------------------------------------------------------------------
verifyASMinstance
startSqlplus

#
# Loop as many times as requested or forever
#
while :; do
print -p "$query" \
&& print -p "prompt $endOfOutput" \
|| fatalError '
Write to co-process failed (collectData)'
stats=$(drainOutput keep)

print -- "$stats\nEOL"

index=index+1
(( count<index && count>0 )) && break

sleep $interval
done | \
awk '
BEGIN { firstSample=1
}
/^EOL$/ {
firstSample=0; firstLine=1
next
}
{
path=$4
if (path ~ /^ *$/) next
group[path]=$1; disk[path]=$2; name[path]=$3

reads[path]=$5; writes[path]=$6
readErrors[path]=$7; writeErrors[path]=$8
readTime[path]=$9; writeTime[path]=$10
readBytes[path]=$11; writeBytes[path]=$12

# reads and writes


readsDiff[path]=reads[path]-readsPrev[path]
writesDiff[path]=writes[path]-writesPrev[path]

# read errors and write errors


readErrorsDiff[path]=readErrors[path]-readErrorsPrev[path]
writeErrorsDiff[path]=writeErrors[path]-writeErrorsPrev[path]

# read time and write time


readTimeDiff[path]=readTime[path]-readTimePrev[path]
writeTimeDiff[path]=writeTime[path]-writeTimePrev[path]

# average read time and average write time in msec (data provided in csec)
avgReadTime[path]=0; avgWriteTime[path]=0
if ( readsDiff[path] ) avgReadTime[path]=(readTimeDiff[path]/readsDiff[path])*1000
if ( writesDiff[path]) avgWriteTime[path]=(writeTimeDiff[path]/writesDiff[path])*1000

# bytes and KB read and bytes and KB written


readBytesDiff[path]=readBytes[path]-readBytesPrev[path]
writeBytesDiff[path]=writeBytes[path]-writeBytesPrev[path]
readKb[path]=readBytesDiff[path]/1024
writeKb[path]=writeBytesDiff[path]/1024

# average read size and average write size


avgReadSize[path]=0; avgWriteSize[path]=0
if ( readsDiff[path] ) avgReadSize[path]=readBytesDiff[path]/readsDiff[path]
if ( writesDiff[path] ) avgWriteSize[path]=writeBytesDiff[path]/writesDiff[path]

if (!firstSample) {
if (firstLine) {
"date" | getline now; close("date")
printf "\n"
printf "Date: %s Interval: %d secs %s\n\n", now, '"$interval"'
, "'
"$diskgroup_string"'
"
printf "%-40s %2s %3s %8s %8s %6s %6s %8s %8s %7s %7s %4s %4s\n", \
"DiskPath - DiskName","Gr","Dsk","Reads","Writes","AvRdTm",\
"AvWrTm","KBRd","KBWr","AvRdSz","AvWrSz", "RdEr", "WrEr"
firstLine=0
}
printf "%-40s %2s %3s %8d %8d %6.1f %6.1f %8d %8d %7d %7d %4d %4d\n", \
path " - " name[path], group[path], disk[path], \
readsDiff[path], writesDiff[path], \
avgReadTime[path], avgWriteTime[path], \
readKb[path], writeKb[path], \
avgReadSize[path], avgWriteSize[path], \
readErrorsDiff[path], writeErrorsDiff[path]
}

readsPrev[path]=reads[path]; writesPrev[path]=writes[path]
readErrorsPrev[path]=readErrors[path]; writeErrorsPrev[path]=writeErrors[path]
readTimePrev[path]=readTime[path]; writeTimePrev[path]=writeTime[path]
readBytesPrev[path]=readBytes[path]; writeBytesPrev[path]=writeBytes[path]
}
END {
}
'
exit 0
