Oracle Database 19c Performance Tuning - Complete Guide
BRIJESH MEHRA
From performance tuning fundamentals to planning for future growth, this guide equips you with the knowledge to build and maintain fast, stable, and scalable Oracle databases.
INTRODUCTION TO ORACLE DATABASE PERFORMANCE TUNING
Oracle Database is a leading relational database management system renowned for its robustness, scalability, and ability
to handle complex workloads. Performance tuning is the process of optimizing the database to ensure it meets
application demands with minimal response times and efficient resource utilization. This guide provides a
comprehensive exploration of Oracle Database performance tuning, covering methodologies, tools, practical examples,
and advanced techniques to help database administrators and developers enhance database efficiency. Performance
tuning addresses bottlenecks at multiple levels, including SQL queries, instance configuration, memory management,
disk input/output operations, application design, and system resources. The goal is to reduce query response times,
minimize contention, and ensure scalability under varying workloads. By leveraging Oracle’s architecture and built-in
tools, administrators can proactively monitor and optimize performance.
To effectively tune an Oracle Database for optimal performance, a comprehensive understanding of its underlying
architecture is essential. Oracle Database is a complex system composed of several critical components, broadly
categorized into memory structures, background processes, and physical storage elements. Each of these plays a pivotal
role in the database's overall functionality, responsiveness, and reliability.
Memory Structures:
Oracle utilizes two primary memory structures—the System Global Area (SGA) and the Program Global Area (PGA). The
SGA is a shared memory region allocated during instance startup and is accessible by all server and background
processes. It stores crucial data such as the database buffer cache (which holds copies of data blocks), the shared pool
(which caches SQL execution plans and dictionary data), the redo log buffer (used to store redo entries before they are
written to disk), and other components like the Java pool and large pool. Effective tuning of the SGA can significantly
reduce disk I/O and improve query performance.
On the other hand, the PGA is allocated to each server process individually and contains data that is private to that
process. It includes session-specific information such as sort area, session variables, and other runtime control
structures. Unlike the SGA, the PGA is not shared between processes. Tuning the PGA is essential for operations
involving sorting, hashing, and bitmap creation, especially in OLAP workloads.
Background Processes:
Several background processes support the internal operation of the Oracle instance. Among the most critical are:
• DBWn (Database Writer): Responsible for writing modified blocks from the buffer cache to the datafiles on disk.
• LGWR (Log Writer): Writes redo log entries from the redo log buffer to the online redo log files, ensuring
transaction durability.
• SMON (System Monitor): Performs crash recovery when the database is started after a failure and handles tasks
like temporary segment cleanup.
• PMON (Process Monitor): Cleans up after failed user processes by releasing resources and rolling back
uncommitted transactions.
• CKPT (Checkpoint Process): Updates the control files and datafile headers to record that a checkpoint has
occurred, which helps reduce recovery time.
Additional processes such as ARCn (Archiver), RECO (Recoverer), and MMON (Manageability Monitor) may also be active
depending on the database configuration.
Physical Storage:
At the physical layer, Oracle stores data in a variety of file types, each with specific roles:
• Datafiles contain the actual table and index data that make up the database.
• Redo log files store all changes made to the data, providing the ability to recover transactions in the event of
failure.
• Control files maintain the structure of the database, including the locations of datafiles and redo logs, as well as
other critical metadata.
Together, these components form the core of the Oracle Database architecture. A deep understanding of how they
interact allows a DBA to fine-tune memory allocation, optimize process efficiency, and ensure that the storage layout
supports the performance and reliability needs of the applications it serves.
Performance Tuning Methodology
The tuning methodology builds on the architecture just described. The database consists of the instance,
which includes memory structures like the System Global Area and Program Global Area, and background processes
such as the Database Writer, Log Writer, and Checkpoint process. The database itself comprises physical files, including
datafiles, control files, and redo logs. The System Global Area stores cached data blocks, shared SQL areas, and redo
buffers, while the Program Global Area manages memory for sorting, hashing, and session-specific operations.
Background processes handle tasks like writing data to disk, logging transactions, and maintaining database consistency.
A performance baseline, capturing metrics like transaction volumes, response times, CPU usage, and input/output
statistics, is critical for tuning. For example, using the Automatic Workload Repository, administrators can collect
baseline data during peak periods, such as 9:00 AM to 11:00 AM, to compare against future performance issues.
Common bottlenecks include poorly tuned SQL, insufficient memory allocation, disk contention, and latch contention. A
systematic tuning approach involves setting performance goals, measuring current metrics, and iteratively applying
optimizations. For instance, to create a baseline snapshot, use:
```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
END;
/
```
The first step in optimization is identifying which queries are consuming the most resources. Tools like Oracle Enterprise
Manager (OEM), Automatic Workload Repository (AWR), and SQL Trace/TKPROF can be used to profile workload and
generate performance reports. These tools help pinpoint expensive queries based on execution time, I/O, buffer gets,
and CPU time. A direct method is querying the dynamic performance view V$SQL, which provides real-time metrics on
SQL statements:
```sql
SELECT SQL_ID, EXECUTIONS, ELAPSED_TIME, DISK_READS
FROM V$SQL
ORDER BY ELAPSED_TIME DESC;
```
This allows you to quickly find long-running queries and investigate further.
One of the common performance pitfalls in Oracle databases is excessive hard parsing due to the use of literal values in
SQL statements. For instance:
```sql
-- INEFFICIENT
SELECT * FROM EMPLOYEES WHERE EMP_ID = 123;
```
This causes Oracle to generate a new execution plan each time a different literal value is used. Instead, using bind
variables ensures reuse of existing execution plans:
```sql
-- OPTIMIZED
SELECT * FROM EMPLOYEES WHERE EMP_ID = :EMP_ID;
```
Benefits include reduced CPU usage, less contention on shared memory structures, and improved scalability.
Indexes are essential for reducing full table scans and accelerating data retrieval. A well-placed index—such as a B-tree
index on a column frequently used in WHERE conditions, JOINs, or ORDER BY clauses—can drastically improve
performance. However, over-indexing or having poorly chosen indexes can lead to performance degradation during
insert/update operations.
Regularly monitor index usage using DBA_HIST_SQLSTAT and DBA_HIST_SQL_PLAN to ensure they’re being used
efficiently.
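As a rough sketch of such monitoring (EMP_IDX1 is a hypothetical index name), the historical plan repository can show whether an index ever appears in captured execution plans:

```sql
-- Which captured statements reference a given index in their plans?
-- (EMP_IDX1 is a hypothetical index name; adjust to your schema)
SELECT P.SQL_ID, COUNT(*) AS PLAN_LINES
FROM DBA_HIST_SQL_PLAN P
WHERE P.OBJECT_NAME = 'EMP_IDX1'
GROUP BY P.SQL_ID;
```

An index that never shows up in any plan over a representative period is a candidate for review or removal.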
Avoiding SELECT * and Reducing Data Transfer
Fetching unnecessary columns increases I/O, network load, and memory usage; select only the columns the application actually needs.
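A minimal before/after sketch against the sample EMPLOYEES table:

```sql
-- INEFFICIENT: fetches every column, including ones the application ignores
SELECT * FROM EMPLOYEES WHERE DEPARTMENT_ID = 10;

-- OPTIMIZED: fetch only what is needed
SELECT EMP_ID, NAME, SALARY FROM EMPLOYEES WHERE DEPARTMENT_ID = 10;
```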
Complex or nested queries may perform poorly. Refactoring them into simpler and more efficient forms is often
beneficial. For example, an EXISTS subquery can often be rewritten as a join:

```sql
-- INEFFICIENT
SELECT E.NAME, E.DEPARTMENT_ID
FROM EMPLOYEES E
WHERE EXISTS (SELECT 1 FROM DEPARTMENTS D
              WHERE D.DEPARTMENT_ID = E.DEPARTMENT_ID
                AND D.LOCATION_ID = 1700);

-- OPTIMIZED
SELECT E.NAME, E.DEPARTMENT_ID
FROM EMPLOYEES E
JOIN DEPARTMENTS D ON E.DEPARTMENT_ID = D.DEPARTMENT_ID
WHERE D.LOCATION_ID = 1700;
```

A WITH clause (common table expression) can likewise make repeated subqueries clearer and easier to optimize:

```sql
WITH RECENT_ORDERS AS (
  SELECT * FROM ORDERS WHERE ORDER_DATE > SYSDATE - 30
)
SELECT CUSTOMER_ID, COUNT(*)
FROM RECENT_ORDERS
GROUP BY CUSTOMER_ID;
```
Before and after any tuning, it's critical to inspect the execution plan using EXPLAIN PLAN FOR or
DBMS_XPLAN.DISPLAY_CURSOR.
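A typical inspection sequence (the statement itself is illustrative) might look like:

```sql
EXPLAIN PLAN FOR
SELECT E.NAME FROM EMPLOYEES E WHERE E.DEPARTMENT_ID = 10;

-- Display the plan just produced
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Or, for a statement already in the shared pool (by SQL_ID):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));
```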
Look for operations such as Full Table Scan, Nested Loop, or Hash Join and determine if they're optimal given the data
volume and access pattern.
Oracle's Cost-Based Optimizer (CBO) relies heavily on up-to-date statistics to generate efficient execution plans. Use
DBMS_STATS to gather statistics regularly.
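For example, to refresh statistics on one table (the HR schema and EMPLOYEES table are placeholders):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'HR',
    tabname          => 'EMPLOYEES',
    cascade          => TRUE,                       -- also gather index statistics
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE -- let Oracle pick the sample size
  );
END;
/
```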
In certain cases, hints like /*+ INDEX(emp emp_idx1) */ or /*+ FIRST_ROWS */ can guide the optimizer, though they
should be used sparingly and only when justified by testing.
In modern database environments, especially those involving complex reporting and analytical workloads, performance
can often be constrained by repeated execution of the same expensive queries. These queries, when run frequently and
against data that does not change very often, place unnecessary load on the system. To alleviate this, Oracle offers
advanced mechanisms such as query result caching and materialized views, which serve as powerful tools to enhance
efficiency and responsiveness.
Query caching is a technique that allows the database to store the result of a previously executed query in memory.
When the same query is issued again—without any changes to the underlying data—the stored result can be retrieved
directly, eliminating the need for reprocessing. This drastically reduces CPU consumption and disk I/O, as the engine
skips parsing, optimization, and execution phases. It also reduces concurrency issues by lowering contention on
frequently accessed tables. Query caching is particularly useful in environments where the same query is executed
hundreds or thousands of times per hour, such as in dashboards, reporting tools, and mobile or web applications that
display static summary data.
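In Oracle, result caching is typically requested per statement with the RESULT_CACHE hint (or enabled more broadly via the RESULT_CACHE_MODE parameter); a minimal sketch:

```sql
-- Cache this result set; repeat executions are served from the result cache
-- until the underlying ORDERS data changes
SELECT /*+ RESULT_CACHE */ STATUS, COUNT(*)
FROM ORDERS
GROUP BY STATUS;
```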
Materialized views serve a broader purpose. Unlike a standard view, which is a virtual representation and always
reflects the latest state of the data, a materialized view physically stores the output of a query. These are especially
helpful when the query involves complex joins, aggregations, or subqueries. By storing the output on disk, the database
can deliver the results almost instantaneously, without recalculating from the base tables each time. The materialized
view can be refreshed on demand or scheduled at intervals, depending on how fresh the data needs to be. This
approach strikes a balance between performance and data accuracy. In environments where perfect real-time data is
not critical, materialized views allow systems to scale and perform optimally even under heavy loads.
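A hedged sketch of a materialized view over the ORDERS table used earlier, refreshed on demand:

```sql
CREATE MATERIALIZED VIEW MV_ORDERS_BY_CUSTOMER
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT CUSTOMER_ID, COUNT(*) AS ORDER_COUNT
FROM ORDERS
GROUP BY CUSTOMER_ID;

-- Refresh when fresher data is needed ('C' requests a complete refresh)
BEGIN
  DBMS_MVIEW.REFRESH('MV_ORDERS_BY_CUSTOMER', 'C');
END;
/
```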
Using these features effectively requires thoughtful consideration. For example, administrators must assess how often
the data changes, how costly the original query is, and whether the performance gains justify the memory or storage
overhead. In some cases, combining both strategies—caching for quick reads and materialized views for heavy
summarizations—provides a layered optimization approach. When implemented appropriately, these techniques not
only reduce latency but also free up resources for other database operations.
Conclusion
SQL query optimization is far more comprehensive than just rewriting individual SQL queries. It represents a system-
wide strategy aimed at ensuring that a database system not only runs faster but also uses resources—such as memory,
CPU, and disk I/O—more efficiently. The ultimate goal is to deliver high performance, scalability, and responsiveness to
users and applications, regardless of the size or complexity of the underlying data.
Rather than being a single task, SQL optimization involves a combination of processes, tools, and design choices that
work together to reduce the time it takes for the database to process and return results. It looks at the big picture—how
data is accessed, how it's stored, how often it's queried, and how the database engine decides the best way to handle
each request. The benefits of optimization go beyond speed; they also contribute to better hardware utilization,
reduced operational costs, and improved user experience.
One of the foundational principles in optimization is avoiding repetitive and unnecessary work. In many systems, the
same queries are run repeatedly with little or no change. Reprocessing these queries each time can be wasteful. This is
where features like query result caching and materialized views provide huge advantages. These mechanisms allow
Oracle to store the outcome of frequently executed or resource-intensive queries, so that when they are called again,
the results are retrieved directly from memory or disk—without redoing the expensive calculations or data scanning.
This is similar to solving a tough math problem once and writing down the answer. The next time someone asks, you can
just read the answer instead of solving it again. In database terms, this means faster responses, lower CPU
consumption, and improved system throughput, all without needing to change how your application code is written.
However, true performance optimization isn't achieved through shortcuts alone. It’s a collaborative and continuous
process that requires the active involvement of both developers and database administrators (DBAs). Developers need
to write clean, efficient queries and understand how their applications interact with the database. DBAs, on the other
hand, must ensure that the database is well-configured, properly indexed, and closely monitored for performance
issues. Key practices on both sides include:
• Designing queries that fetch only what is needed, avoiding unnecessary columns or tables.
• Creating and maintaining indexes on the right columns to speed up data access.
• Reviewing execution plans to understand how Oracle processes each query and identify potential bottlenecks.
• Keeping database statistics up-to-date, so the query optimizer can make smart decisions.
In large-scale systems or production environments, this kind of proactive tuning can make the difference between a
slow, frustrating experience and a responsive, reliable application. Performance problems often grow over time if not
addressed early—especially as data grows or the number of users increases.
These practices, when applied consistently, help build fast, efficient, and scalable Oracle-based systems that can grow
with business needs.
In summary, SQL query optimization is a continuous process. You don't just do it once; you keep doing it as the system
grows, more users come in, and data increases. The goal is to make sure the database remains fast, stable, and able to
handle demand without wasting system resources. When all parts of the optimization puzzle come together (query
design, indexing, caching, materialized views, and regular monitoring) the result is a database system that performs
well, scales easily, and keeps both users and stakeholders satisfied. This approach doesn't just help performance; it also
reduces costs, improves reliability, and prepares the system to support more users or larger workloads in the future. For
any DBA or developer, even those just starting out, understanding and applying these concepts step by step will lead to
much better database systems in the long run.
Memory Tuning: SGA and PGA
Oracle Database 19c provides robust mechanisms for dynamic memory management that allow the database to adjust
memory allocation automatically based on the workload demands. The two primary memory areas managed in Oracle
are the System Global Area (SGA) and the Program Global Area (PGA). Effective tuning of these memory structures is
crucial for achieving optimal database performance and efficient resource utilization.
The SGA is a shared memory region that contains data and control information for the Oracle instance. It is shared by all
server and background processes. The SGA includes multiple components, such as the shared pool, buffer cache, redo
log buffer, large pool, Java pool, and others.
Oracle 19c supports dynamic memory tuning of the SGA via parameters like SGA_TARGET and SGA_MAX_SIZE:
• SGA_TARGET defines the total size of all SGA components combined. Oracle automatically distributes this
memory among its components based on workload.
• SGA_MAX_SIZE is the upper limit on the total SGA size and must be set equal to or larger than SGA_TARGET. It
provides a ceiling for dynamic resizing.
• Shared Pool: The shared pool caches SQL execution plans, dictionary information, and other control structures.
Proper sizing is vital to reduce hard parses, which occur when SQL statements cannot be reused and must be
fully parsed again, leading to higher CPU consumption and slower query execution.
• Buffer Cache: This is the memory area where Oracle caches data blocks read from disk. A well-sized buffer cache
minimizes physical I/O by keeping frequently accessed data in memory, significantly improving query response
time. The buffer cache can be divided into multiple subcaches, such as the default buffer cache, keep cache, and
recycle cache, each serving different purposes.
• Redo Log Buffer: This buffer stores redo entries (change vectors) before they are written to the redo log files.
Proper tuning helps reduce disk I/O during transaction processing.
• Large Pool and Java Pool: These specialized pools support large memory allocations for operations like backup
and recovery, parallel query execution, and Java code execution within the database.
Oracle 19c’s automatic management dynamically adjusts the size of these pools within the limits set by SGA_TARGET,
but DBAs often manually adjust individual components when specific workloads demand fine-tuned control.
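Current component sizes can be inspected, and the overall target adjusted, roughly as follows (the 8G value is illustrative):

```sql
-- Inspect current dynamic SGA component sizes
SELECT COMPONENT, CURRENT_SIZE/1024/1024 AS SIZE_MB
FROM V$SGA_DYNAMIC_COMPONENTS;

-- Raise the overall SGA target (requires SGA_MAX_SIZE >= the new value)
ALTER SYSTEM SET SGA_TARGET = 8G SCOPE = BOTH;
```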
Program Global Area (PGA)
The PGA is a private memory area used by individual server processes to store session-specific data such as sort areas,
hash join areas, and session variables. Unlike the SGA, the PGA is not shared and is allocated separately for each user
session or process. Two initialization parameters govern its overall size:
• PGA_AGGREGATE_TARGET: This parameter specifies the target total amount of PGA memory available for all
server processes combined. Oracle tries to keep the total PGA memory usage near this target.
• PGA_AGGREGATE_LIMIT: Sets an absolute limit on PGA memory usage for all server processes to avoid
excessive memory consumption that might impact the operating system.
The PGA significantly impacts query performance, especially for operations that require sorting, hashing, bitmap
merging, and other memory-intensive activities. If the PGA is under-allocated, Oracle will perform these operations on
disk (using temporary segments), which can drastically slow down query performance.
Oracle 19c supports Automatic PGA Memory Management, where the database dynamically adjusts the memory
granted to each session based on workload and the overall PGA target.
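PGA pressure can be checked and the target adjusted along these lines (the 2G value is illustrative):

```sql
-- Key PGA statistics; compare 'total PGA allocated' with the target,
-- and watch 'over allocation count' for signs of under-sizing
SELECT NAME, VALUE/1024/1024 AS VALUE_MB
FROM V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
               'total PGA allocated',
               'over allocation count');

ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 2G SCOPE = BOTH;
```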
Oracle 19c also provides an Automatic Memory Management feature, which simplifies memory tuning by allowing
Oracle to manage both SGA and PGA dynamically through the MEMORY_TARGET and MEMORY_MAX_TARGET
parameters.
• MEMORY_TARGET specifies the total amount of memory Oracle can allocate for both SGA and PGA combined.
• MEMORY_MAX_TARGET sets the static upper limit to which MEMORY_TARGET can be raised dynamically.
When AMM is enabled, Oracle continuously monitors workload and system resource usage and automatically
reallocates memory between SGA and PGA components as needed. This reduces the need for manual tuning and helps
maintain consistent performance under fluctuating workloads.
However, while AMM is powerful and convenient, it may not provide the best performance in all environments,
particularly in high-demand or highly specialized workloads. In such cases, manual tuning of SGA_TARGET,
PGA_AGGREGATE_TARGET, and individual subcomponents remains essential to fine-tune the system and achieve the
best balance between resource usage and performance. Practical guidelines include:
• Avoiding excessive hard parses: Monitor the shared pool usage and configure its size to minimize hard parses.
Using SQL plan baselines and cursor sharing techniques also helps reduce parsing overhead.
• Sizing buffer cache appropriately: Analyze database buffer cache hit ratios and physical I/O metrics to
determine if the buffer cache is sized properly. Increasing buffer cache reduces disk I/O but comes at the cost of
using more memory.
• Adjusting PGA for complex queries: Monitor query execution plans and PGA memory statistics. Queries
involving large sorts or hash joins may require more PGA memory for optimal performance.
• Monitoring and adjusting dynamically: Use Oracle’s dynamic performance views (e.g.,
V$SGA_DYNAMIC_COMPONENTS, V$PGA_TARGET_ADVICE, and V$MEMORY_DYNAMIC_COMPONENTS) to
understand how memory is allocated and used. This insight helps inform tuning decisions.
• Consider workload patterns: OLTP workloads might benefit from larger shared pools and buffer caches, while
OLAP workloads might require more PGA memory for sorting and hashing operations.
SQL Tuning and the Cost-Based Optimizer
SQL tuning is essential to make your database queries run faster and use fewer resources. At the heart of SQL tuning in
Oracle is the Cost-Based Optimizer (CBO), which decides the best way to run your SQL queries. This guide breaks down
the key concepts and techniques so you can understand and improve SQL performance step-by-step.
The CBO is Oracle’s decision-maker for SQL execution. It evaluates multiple possible ways to run a query and estimates
the "cost" in terms of CPU, I/O, and memory. The optimizer picks the plan with the lowest estimated cost, which usually
means faster execution.
The CBO relies heavily on statistics about tables and indexes—like the number of rows, data distribution, and index
health. Without accurate, up-to-date statistics, the optimizer might guess wrong and choose slow plans.
You should regularly collect statistics using Oracle’s DBMS_STATS package, especially after big data changes like loads or
deletes. Fresh statistics help the optimizer make better decisions.
SQL tuning means changing your SQL or the database environment to make queries faster. This can include rewriting
SQL, creating indexes, or adjusting optimizer settings.
Using the SQL Tuning Advisor
Oracle’s SQL Tuning Advisor is an automated tool that analyzes slow SQL and suggests improvements—like new indexes,
rewritten SQL, or refreshed stats. It’s a great starting point for troubleshooting.
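A minimal DBMS_SQLTUNE workflow for one statement might look like this ('&sql_id' and the task name are placeholders):

```sql
DECLARE
  l_task VARCHAR2(128);
BEGIN
  -- '&sql_id' stands in for the problem statement's SQL_ID
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id    => '&sql_id',
                                            task_name => 'tune_demo_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'tune_demo_task');
END;
/

-- Review the advisor's findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_demo_task') FROM DUAL;
```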
SQL Plan Baselines store “good” execution plans. When Oracle parses a query, it compares possible plans with these
baselines and prefers proven fast plans. This keeps SQL performance stable over time, even after upgrades or stats
changes.
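As a sketch, a plan currently in the cursor cache can be captured as an accepted baseline with DBMS_SPM ('&sql_id' is a placeholder):

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Load the statement's cached plan(s) as accepted SQL plan baselines
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '&sql_id');
END;
/
```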
Bind variables act as placeholders in SQL. They let Oracle reuse execution plans for similar queries with different values,
reducing parsing overhead and saving CPU.
If you write queries with literal values (like WHERE id=123), Oracle has to parse each variant separately. Using bind
variables avoids this and improves shared pool usage.
Indexes let Oracle quickly find rows without scanning the whole table. Proper indexing on columns used in WHERE, JOIN,
or ORDER BY clauses is crucial for fast queries.
When filtering on several columns, a composite index (one index on multiple columns) often works better than several
single-column indexes. The column order in the index matters and should match query predicates.
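For instance, assuming queries commonly filter on DEPARTMENT_ID and then HIRE_DATE, one composite index can serve both predicates:

```sql
-- The leading column should match the most frequently used / most selective predicate
CREATE INDEX EMP_DEPT_HIRE_IDX ON EMPLOYEES (DEPARTMENT_ID, HIRE_DATE);
```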
If your queries filter on expressions like UPPER(name), normal indexes won’t help. Function-based indexes store the
computed value, allowing fast access even on transformed columns.
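For example, to support case-insensitive lookups on a NAME column:

```sql
-- Index the computed expression so UPPER(NAME) predicates can use it
CREATE INDEX EMP_UPPER_NAME_IDX ON EMPLOYEES (UPPER(NAME));

-- This query can now use the index instead of scanning the whole table
SELECT EMP_ID FROM EMPLOYEES WHERE UPPER(NAME) = 'SMITH';
```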
Avoid Over-Indexing
Too many indexes slow down inserts, updates, and deletes because Oracle must maintain them. Only create indexes
that your queries will actually use.
Execution plans show the steps Oracle takes to run your SQL, including scans, joins, and sorts. Tools like EXPLAIN PLAN
or DBMS_XPLAN.DISPLAY help you see if Oracle is using indexes or doing costly full scans.
SQL Profiles give the optimizer extra info about data distribution and cardinality, helping it pick better plans when
standard stats aren’t enough.
Hints are commands embedded in SQL to force the optimizer to use a specific method, like USE_NL for nested loops. Use
hints only after careful testing, as they override the optimizer’s judgment.
Selecting all columns wastes resources, especially if you only need a few. Limiting columns reduces data transfer and
memory usage.
Use Oracle tools like AWR and ASH to track slow queries and identify changes in execution plans or resource usage. This
proactive monitoring helps catch problems early.
Wait events like db file sequential read indicate what resources SQL is waiting for, helping you find if your queries are
I/O-bound or CPU-bound.
Adaptive Cursor Sharing Handles Different Bind Values
Oracle can create multiple execution plans for the same SQL depending on bind variable values, improving performance
for variable workloads.
For big data scans or reporting, Oracle can split work across CPU cores using parallel execution, reducing elapsed time
but requiring careful resource management.
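Parallelism can be requested per statement with a hint; the degree of 4 and the AMOUNT column here are illustrative:

```sql
-- Ask the optimizer to scan ORDERS with up to 4 parallel execution servers
SELECT /*+ PARALLEL(O, 4) */ CUSTOMER_ID, SUM(AMOUNT)
FROM ORDERS O
GROUP BY CUSTOMER_ID;
```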
Summary
SQL tuning revolves around helping the CBO make the best decisions. By keeping statistics fresh, using bind variables,
maintaining proper indexes, analyzing execution plans, and leveraging Oracle’s tuning tools, you can greatly improve
query performance and system responsiveness.
I/O Tuning and Concurrency Management
Efficient Input/Output (I/O) operations and effective storage management form the backbone of optimal Oracle
database performance. The speed and reliability of data reads and writes significantly influence how quickly queries
complete and how smoothly the entire system operates. When I/O throughput is insufficient or storage is poorly
configured, these factors become serious bottlenecks that degrade query response times and overall user experience.
Delays caused by slow disk operations cannot be fully compensated for by faster processors or increased memory alone.
In addition to storage and I/O concerns, Oracle’s concurrency and lock management mechanisms are essential for
maintaining data integrity during simultaneous multi-user access. However, if these locking processes are not carefully
monitored and tuned, they can lead to contention, blocking, and performance degradation. Balancing efficient I/O
handling with effective concurrency control is therefore vital to achieving consistent and scalable database performance.
This comprehensive guide explores practical strategies and best practices to optimize both I/O throughput and
concurrency management in Oracle environments, helping you ensure high performance and reliability.
Understanding I/O Tuning and Storage Optimization
Database I/O involves reading and writing data to disk, which is often the slowest part of query execution. Even
powerful CPUs or large memory caches can’t compensate for slow or overloaded storage. Thus, tuning I/O throughput
directly impacts query speed and system scalability.
ASM (Automatic Storage Management) is Oracle's integrated volume manager and file system. It simplifies storage management and evenly distributes I/O
across all disks in a disk group. This balancing reduces hotspots and improves throughput by parallelizing reads and
writes. Separating different file types onto appropriate storage also matters:
• Redo logs (LGWR writes sequentially and synchronously) should be on fast disks separate from datafiles.
• Datafiles hold actual table and index data and benefit from spreading across multiple disks or ASM disk groups.
• Temporary tablespaces used for sorting and joins can generate heavy I/O; placing them on separate fast storage
avoids interference.
For large table scans, Oracle can bypass the buffer cache and perform direct path I/O:
• Direct Path Reads read large chunks of data directly into the PGA, reducing buffer cache usage and CPU
overhead.
• Direct Path Writes write large temporary or sort segments directly to disk.
Properly configuring queries and tablespaces to allow direct path I/O reduces overhead and speeds up large queries.
Oracle background processes like DB Writer (DBWR) and Log Writer (LGWR) handle writing dirty buffers and redo log
buffers to disk:
• Adjust LGWR settings to avoid log write bottlenecks, which can stall user commits.
• Monitor wait events such as db file sequential read, db file scattered read, and log file sync to identify
contention points.
Using the Oracle I/O Calibration Tool
Oracle provides an I/O Calibration Tool that simulates typical database I/O patterns and measures the storage
subsystem’s throughput and latency.
• Use results to determine if current storage meets workload demands or requires upgrades.
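The calibration runs through DBMS_RESOURCE_MANAGER.CALIBRATE_IO; the disk count and latency inputs below are illustrative and should describe your actual storage:

```sql
DECLARE
  l_iops    PLS_INTEGER;
  l_mbps    PLS_INTEGER;
  l_latency PLS_INTEGER;
BEGIN
  -- num_physical_disks and max_latency (ms) describe the storage under test
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 4,
    max_latency        => 20,
    max_iops           => l_iops,
    max_mbps           => l_mbps,
    actual_latency     => l_latency
  );
  DBMS_OUTPUT.PUT_LINE('Max IOPS: ' || l_iops || ', Max MB/s: ' || l_mbps);
END;
/
```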
Oracle dynamic views like v$filestat, v$session_wait, and v$waitstat provide insights into I/O activity and wait events:
• v$filestat shows I/O operations per file, helping spot hot files.
• Use RAID levels optimized for your workload (RAID 10 for OLTP, RAID 5 or 6 for read-heavy workloads).
Oracle uses locks to maintain data consistency and prevent concurrent transactions from interfering with each other.
Locks protect data during transactions but can lead to contention if multiple sessions try to access the same resources
simultaneously.
• DBA_BLOCKERS and DBA_WAITERS: Identify blocking and waiting sessions causing lock contention.
• Use appropriate isolation levels (READ COMMITTED vs SERIALIZABLE) based on consistency and concurrency
needs.
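Blockers can also be spotted directly from V$SESSION's BLOCKING_SESSION column, for example:

```sql
-- Sessions currently blocked, with the session blocking each of them
SELECT SID, SERIAL#, BLOCKING_SESSION, EVENT, SECONDS_IN_WAIT
FROM V$SESSION
WHERE BLOCKING_SESSION IS NOT NULL;
```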
Deadlocks occur when two or more sessions wait for each other’s locks:
• Oracle automatically detects and resolves deadlocks by rolling back one session.
• Analyze deadlock graphs (trace files) to identify root causes and fix SQL or transaction design.
Poorly designed queries that scan or lock large amounts of data increase contention:
Oracle uses row-level locking by default, which reduces contention compared to locking entire tables:
Wait events such as enq: TX - row lock contention indicate blocking caused by row locks. Monitoring and minimizing
such waits improve concurrency.
Oracle’s multi-version concurrency control allows readers to access consistent data without blocking writers, minimizing
read locks and improving concurrency.
• Use Enterprise Manager or Oracle Performance Hub for graphical lock and wait analysis.
• Educate developers and DBAs on best practices to avoid common locking pitfalls.
Summary
Efficient Oracle performance depends heavily on tuning I/O and storage configuration as well as managing concurrency
effectively. Leveraging ASM, separating files physically, enabling direct path I/O, and monitoring wait events optimizes
I/O throughput. Meanwhile, understanding and controlling locking behavior, avoiding long transactions, and monitoring
waits help maintain concurrency without contention. Combining these tuning strategies ensures a responsive, scalable
Oracle database.
Inside Oracle’s Brain: How AWR and ADDM Power
Intelligent Performance Tuning
Performance tuning in Oracle isn't just about fixing slow queries—it's about understanding the system as a whole. Two
of Oracle's most powerful diagnostic tools for this purpose are AWR (Automatic Workload Repository) and ADDM
(Automatic Database Diagnostic Monitor). Together, they form the foundation of Oracle's self-diagnostic and tuning
framework, offering deep insight into system performance over time.
What is AWR?
The Automatic Workload Repository (AWR) is a built-in performance monitoring feature in Oracle that collects and
stores system statistics automatically at regular intervals—by default, every 60 minutes. Each data capture is called a
snapshot, and these snapshots contain rich information about how the database is performing: CPU usage, memory
activity, disk I/O, wait events, SQL execution stats, and more.
Over time, these snapshots form a historical baseline that allows DBAs to compare database performance between
different periods. This is particularly useful when troubleshooting intermittent issues or analyzing the impact of
configuration changes. A typical AWR report highlights:
• Top SQL statements consuming the most resources (CPU, I/O, elapsed time).
• Wait events that show where the database is spending time (e.g., I/O, locks, latches).
• Instance activity stats such as buffer cache efficiency, redo rates, and parsing.
• Memory usage patterns, helping tune SGA and PGA components effectively.
You can generate AWR reports via Oracle Enterprise Manager (OEM) or command-line scripts such as awrrpt.sql. These
reports compare two snapshots and generate an easy-to-follow summary of performance trends and anomalies.
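Snapshots can also be taken on demand and reports generated directly from SQL. A minimal sketch (the snapshot IDs 100 and 101 and the dbid are placeholders; look up real values in dba_hist_snapshot and v$database; the Diagnostics Pack license is required):

```sql
-- Take an on-demand AWR snapshot.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Look up the database id first.
SELECT dbid FROM v$database;

-- Generate a text report between two snapshots:
-- arguments are dbid, instance number, begin snap, end snap.
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
         123456789, 1, 100, 101));
```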
What is ASH?
Unlike AWR, which captures data in hourly snapshots, ASH (Active Session History) samples session activity every second and retains data in
memory for a recent time window (typically the last 30–60 minutes, depending on workload and memory size).
ASH is especially useful for investigating short-term or intermittent issues that don’t align with snapshot times.
ASH reports show what active sessions were doing, what they were waiting on, and how frequently—allowing you to
quickly zero in on hot spots.
You can generate ASH reports using scripts like ashrpt.sql, or query views such as v$active_session_history and
dba_hist_active_sess_history for deeper custom analysis.
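For example, a quick ASH query for the busiest wait events over the last half hour might look like this (Diagnostics Pack required):

```sql
-- Wait events most frequently sampled among active sessions
-- in the last 30 minutes.
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '30' MINUTE
AND    event IS NOT NULL
GROUP  BY event
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;
```

Because ASH is sampled, the counts are approximate, but they reliably surface where active sessions spend their time.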
What is ADDM?
The Automatic Database Diagnostic Monitor (ADDM) takes AWR data a step further by analyzing it and offering
actionable tuning recommendations. Every time a snapshot is taken, ADDM processes the data and identifies the root
causes of performance issues.
ADDM is particularly helpful because it doesn’t just highlight problems; it prioritizes them based on impact and offers
solutions, such as tuning specific SQL statements, adding or modifying indexes, resizing SGA or PGA components, and reducing contention hot spots.
ADDM considers CPU usage, wait events, I/O, memory, and contention issues, giving a holistic picture of what's wrong
and how to fix it. This saves DBAs countless hours of manual analysis and guesswork.
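As a sketch, ADDM can also be run manually over a snapshot range from SQL*Plus (snapshot IDs 100 and 101 are placeholders; pick real ones from dba_hist_snapshot):

```sql
-- Run ADDM over a snapshot range and print its findings.
VARIABLE tname VARCHAR2(60)
EXEC DBMS_ADDM.ANALYZE_DB(:tname, 100, 101);
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;
```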
In modern Oracle environments, performance tuning isn’t a luxury—it’s a requirement. Applications need to run faster,
databases must scale under load, and users expect instant responses. To meet these demands, Oracle offers a powerful,
built-in performance diagnostics suite: AWR (Automatic Workload Repository), ADDM (Automatic Database Diagnostic
Monitor), and ASH (Active Session History). These tools form the backbone of proactive and reactive database tuning
and are critical for ensuring long-term stability, scalability, and responsiveness.
Unlike traditional ad hoc monitoring, these tools provide systematic, time-aware, and actionable insights into how your
database is behaving—both in real time and over historical baselines.
Why They’re Indispensable:
AWR tells you what happened; ADDM tells you why it happened and how to fix it. Together, AWR and ADDM transform raw
database activity into structured, prioritized, and actionable insights. They reduce the time to resolve issues, help avoid
future problems, and support consistent performance tuning practices across teams. Every Oracle DBA—especially in
production or critical environments—should make them part of their regular performance toolkit. If you're tuning Oracle
without AWR, ADDM, and ASH, you're flying blind. Together, they provide depth, clarity, and confidence to performance
management. Make reviewing their outputs a regular habit—not just when there's a crisis—and you'll build a system
that's not only fast but resilient, scalable, and supportable.
Performance tuning in Oracle databases is both an art and a science. While Oracle provides numerous built-in tools and
automated features to assist with performance optimization, understanding core tuning principles and applying them
consistently is key to building a high-performing system. Whether you’re working with OLTP or OLAP workloads, these
best practices will help you identify bottlenecks, improve responsiveness, and ensure scalability.
Below are 30 essential best practices for effective performance tuning in Oracle, each explained in detail for real-world
applicability:
1. Understand Your Workload
Start by classifying your workload—OLTP, OLAP, or hybrid. This helps prioritize tuning areas such as transaction
throughput vs. query performance.
2. Keep Optimizer Statistics Current
Use DBMS_STATS to gather accurate and timely statistics. Stale stats mislead the Cost-Based Optimizer (CBO), resulting
in poor execution plans.
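A minimal sketch of a manual statistics refresh (SALES and ORDERS are placeholder schema and table names):

```sql
-- Refresh optimizer statistics for one table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SALES',
    tabname          => 'ORDERS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);  -- also refresh dependent index stats
END;
/
```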
3. Use Bind Variables
Bind variables reduce hard parsing and improve plan reuse. They also protect against SQL injection and reduce shared
pool fragmentation.
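The difference is easy to see in SQL*Plus (the orders table and column are illustrative):

```sql
-- Literal form: each distinct value forces a new hard parse.
--   SELECT * FROM orders WHERE order_id = 1001;

-- Bind-variable form: one shared cursor serves every value.
VARIABLE order_id NUMBER
EXEC :order_id := 1001;
SELECT * FROM orders WHERE order_id = :order_id;
```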
4. Avoid Unnecessary Full Table Scans
Full scans are costly unless the table is small. Ensure that indexes are used for selective queries. Use the INDEX hint
cautiously to enforce index usage if needed.
5. Create the Right Indexes
Create indexes based on WHERE clauses, JOIN columns, and ORDER BY operations. Use bitmap indexes for low-
cardinality data and composite indexes for multi-column filters.
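For example (table and column names are illustrative):

```sql
-- Composite index matching a filter on customer_id and order_date.
CREATE INDEX orders_cust_date_ix ON orders (customer_id, order_date);

-- Bitmap index on a low-cardinality status column
-- (best suited to reporting tables with little concurrent DML).
CREATE BITMAP INDEX orders_status_bix ON orders (status);
```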
6. Monitor Execution Plans
Use tools like DBMS_XPLAN.DISPLAY_CURSOR, SQL Developer Autotrace, and AWR to compare execution plans over
time and detect regressions.
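A minimal sketch of capturing the actual, not merely estimated, plan of the last statement run in the session (the orders table is a placeholder):

```sql
-- Run the statement with runtime statistics enabled via hint...
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM orders;

-- ...then display the plan with actual rows and buffer gets per step.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing estimated versus actual row counts per plan step is one of the fastest ways to spot stale statistics.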
7. Avoid Over-Indexing
Too many indexes slow down INSERT/UPDATE/DELETE operations. Remove unused or redundant indexes after verifying
with v$object_usage.
8. Use SQL Plan Baselines
Lock in known-good plans to prevent regressions after stats refreshes or database upgrades. This ensures stability in SQL
execution.
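A minimal sketch of capturing a baseline from the cursor cache ('abcd1234efgh5' is a placeholder SQL_ID; look up the real one in v$sql):

```sql
-- Load the current cached plan for one statement as a SQL plan baseline.
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
END;
/
```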
9. Use the SQL Tuning and Access Advisors
These advisors analyze poorly performing SQL and suggest improvements, including statistics updates, indexes, and SQL
rewrites.
10. Partition Large Tables
Use range, list, or hash partitioning to manage large data volumes. Partitioning improves performance and simplifies
maintenance.
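A range-partitioning sketch (table name, columns, and boundaries are illustrative):

```sql
-- Partition by date range so queries and maintenance can prune partitions.
CREATE TABLE sales_history (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2025_h1 VALUES LESS THAN (DATE '2025-07-01'),
  PARTITION p2025_h2 VALUES LESS THAN (DATE '2026-01-01'),
  PARTITION p_max    VALUES LESS THAN (MAXVALUE)
);
```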
11. Use Parallel Execution Wisely
Enable parallelism for resource-heavy queries and ETL jobs. Monitor using v$pq_sysstat and adjust
PARALLEL_MAX_SERVERS.
12. Use Materialized Views for Reporting
Materialized views precompute results and refresh periodically, reducing runtime load for reporting queries.
13. Review AWR and ADDM Reports
Generate AWR reports to identify high-load SQL, top wait events, and bottlenecks. Use ADDM for actionable
recommendations based on AWR data.
14. Leverage Active Session History (ASH)
ASH provides session-level history, allowing you to trace session waits, blocking sessions, and time-consuming
operations during specific windows.
15. Monitor Wait Events
Use v$session, v$system_event, and v$session_wait to detect bottlenecks like "db file sequential read" (index I/O) or
"log file sync" (commit latency).
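A minimal sketch of a system-wide wait summary:

```sql
-- Top non-idle wait events since instance startup.
SELECT event, total_waits,
       ROUND(time_waited_micro / 1e6, 1) AS seconds_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited_micro DESC
FETCH FIRST 10 ROWS ONLY;
```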
16. Optimize Redo and Undo Configuration
Size redo logs to avoid frequent log switches. Tune undo tablespaces and retention for long queries and rollback
operations.
17. Reduce Latch Contention
High latch contention in the shared pool or library cache can degrade performance. Use bind variables and avoid
excessive parsing.
18. Tune Memory Allocation
Monitor v$sgainfo and v$pga_target_advice to allocate memory efficiently. Set appropriate values for SGA_TARGET,
PGA_AGGREGATE_TARGET, and DB_CACHE_SIZE.
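For example, the PGA advisor view predicts the effect of alternative PGA_AGGREGATE_TARGET sizes:

```sql
-- Estimated cache hit ratio and over-allocations at candidate sizes.
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;
```

A nonzero estd_overalloc_count at the current size is a strong signal the target is too small.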
19. Size Temporary Tablespaces Appropriately
Large sorts or hash joins may spill to disk. Monitor and resize temporary tablespaces accordingly. Use
TEMP_UNDO_ENABLED = TRUE for temporary undo.
20. Use Direct Path Operations
Enable direct path for data loads and large table scans to bypass buffer cache and reduce CPU overhead.
21. Distribute I/O Across Storage
Distribute redo, temp, and datafiles across ASM disk groups or separate storage tiers. Use
DBMS_RESOURCE_MANAGER.CALIBRATE_IO to benchmark.
22. Tune Redo Log Performance
Increase redo log size, avoid frequent commits, and place redo logs on fast storage to reduce wait times on log file sync.
23. Tune Checkpoint Behavior
Tune FAST_START_MTTR_TARGET and DB_WRITER_PROCESSES to control recovery time and I/O load during
checkpoints.
24. Identify and Spread Hot Blocks
Identify blocks accessed frequently by multiple sessions using v$bh and spread load across partitions or hash-subdivided
data.
25. Optimize PL/SQL Code
Minimize context switching between SQL and PL/SQL. Use bulk operations (BULK COLLECT, FORALL) and avoid row-by-
row processing.
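A minimal bulk-processing sketch (table and column names are illustrative):

```sql
-- One SQL-to-PL/SQL context switch per batch instead of per row.
DECLARE
  TYPE id_tab IS TABLE OF orders.order_id%TYPE;
  l_ids id_tab;
BEGIN
  SELECT order_id BULK COLLECT INTO l_ids
  FROM   orders
  WHERE  status = 'PENDING';

  FORALL i IN 1 .. l_ids.COUNT
    UPDATE orders
    SET    status = 'PROCESSED'
    WHERE  order_id = l_ids(i);

  COMMIT;  -- one commit for the whole batch
END;
/
```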
26. Avoid Excessive Commits
Frequent commits create redo overhead. Batch DML operations to reduce redo and improve efficiency.
27. Tune Connection Pooling
Use efficient connection pooling from the application tier (WebLogic, JDBC UCP) to reduce login/logout overhead and
session churn.
28. Detect Blocking and Deadlocks Early
Use DBA_BLOCKERS, DBA_WAITERS, and v$locked_object to resolve session-level blocking and deadlocks early.
29. Establish Performance Baselines
Capture performance baselines using AWR or custom tools. Compare pre/post-deployment performance to validate
tuning impact.
30. Document and Automate Tuning
Keep a tuning playbook for your environment: SQL baselines, parameter settings, stats collection jobs, and known issue
resolutions. Automate common tuning tasks using scripts or Enterprise Manager.
Below is a list of 30 of the most commonly used Oracle dynamic performance (V$) views for performance tuning. These
views provide insight into various aspects of your Oracle instance — from system-level statistics and SQL execution
details to session waits and I/O behavior. Each view is briefly explained to help you understand its role in the tuning
process.
Note: Many of these views require proper privileges and may need an Oracle Diagnostic Pack license to query.
1. V$DATABASE
Provides basic information about the database such as name, creation date, and current open mode. Useful for quickly
validating environment settings.
2. V$INSTANCE
Shows details about the current instance including status, startup time, and instance role. Essential for understanding
instance-level behavior.
3. V$PARAMETER
Lists the initialization parameters currently in effect. Adjusting parameters based on performance data is a crucial tuning
step.
4. V$OPTION
Displays which options and features are enabled in the database. Helps verify whether certain performance-enhancing
options are active.
5. V$STATNAME
Provides the names and descriptions of dynamic performance statistics. Acts as a dictionary for counters reflected in
other V$ views.
6. V$SYSSTAT
Contains cumulative system-wide statistics since the instance started. Useful for examining overall system performance
trends and baselining.
7. V$SESSTAT
Shows session-specific statistics. Helpful for correlating resource usage with individual sessions and identifying outliers.
8. V$SESSION
Offers detailed information about active and inactive sessions, including session status, SQL_ID, and identifiers for
troubleshooting.
9. V$PROCESS
Provides information about Oracle processes, including memory and CPU usage for background and user processes.
10. V$SESSION_WAIT
Displays current wait events for each session. Critical when diagnosing why a particular session is slow.
11. V$SYSTEM_EVENT
Aggregates wait event data across the entire instance — helping pinpoint system-wide bottlenecks like I/O, network, or
contention issues.
12. V$SQLAREA
Aggregates performance statistics on SQL statements across the instance. Helps identify frequently executed and
resource-intensive SQL.
13. V$SQL
Details current SQL statements and their execution statistics. Used to analyze active SQL and understand execution
performance.
14. V$SQL_PLAN
Provides the execution plan for SQL queries. Reviewing it reveals how the optimizer accesses data and whether indexes
are utilized.
15. V$SQL_PLAN_STATISTICS
Reports execution plan statistics after query execution, showing actual costs and row counts versus expected values.
16. V$SQL_MONITOR
Offers real-time monitoring data for long-running or resource-intensive SQL. Valuable for spotting performance issues in
real time.
17. V$ACTIVE_SESSION_HISTORY
Provides a sampled history of active sessions, revealing transient bottlenecks that might be missed in averaged data.
18. V$FILESTAT
Shows statistics about file-level I/O operations, tracking read and write performance of datafiles.
19. V$IOSTAT_FILE
Provides granular I/O statistics focused on specific files, helping identify disk bottlenecks.
20. V$TEMPSTAT
Reports statistics on temporary operations and space usage in temporary tablespaces. Key for diagnosing sorts that spill
to disk.
21. V$LATCH
Contains information about latches, low-level serialization mechanisms protecting shared memory. Excessive contention
indicates concurrency issues.
22. V$LATCHHOLDER
Identifies sessions currently holding latches. Useful when diagnosing latch contention problems.
23. V$LOCK
Displays all locks held by sessions. Helps identify blocking sessions and inter-session dependencies.
24. V$RESOURCE_LIMIT
Shows current usage and max values for system resources (e.g., sessions, processes). Assists with capacity planning and
resource allocation.
25. V$UNDOSTAT
Reports undo segment usage and historical undo activity, important for tuning undo performance and sizing undo
tablespaces.
26. V$SEGMENT_STATISTICS
Gathers I/O and operational statistics per database segment (tables or indexes). Identifies “hot” or underperforming
segments.
27. V$SYS_TIME_MODEL
Provides detailed CPU time usage statistics for the instance. Helps understand where CPU time is spent — user code,
system, or waits.
28. V$OSSTAT
Offers a snapshot of operating system statistics as seen by Oracle. Correlates database performance with physical
resource limits.
29. V$EVENT_NAME
Lists all wait events known to Oracle with classifications. Helps map wait events observed in other views to root causes.
30. V$SESSION_EVENT
Records wait event counts at the session level over time. Essential for understanding changes in session behavior
historically.
How to Use These Views Together
• Start with system-wide views like V$SYSSTAT and V$SYSTEM_EVENT to detect overall anomalies.
• Drill down to session-level and SQL-focused views such as V$SESSION, V$SQLAREA to isolate problematic queries
or sessions.
• Use historical data from V$ACTIVE_SESSION_HISTORY (ASH) to establish baselines and understand transient
issues.
• Compare metrics before and after tuning changes to verify improvements and avoid regressions.
• Correlate wait events with resource usage (e.g., high I/O waits in V$SYSTEM_EVENT combined with V$FILESTAT
data) to guide targeted tuning efforts.
Further Exploration
• Explore global V$ views (GV$) if using Oracle RAC to analyze cluster-wide behavior.
• Combine these views with Automatic Workload Repository (AWR) and Oracle Enterprise Manager (OEM) reports
for deeper diagnostics.
• Regularly querying and understanding these views builds intuition for spotting and solving performance
problems efficiently.
Final Advice: Driving Oracle Performance in the
Real World
Performance tuning isn’t just a theoretical exercise—it’s mission-critical in real production environments where
downtime or slowness directly affects business outcomes. In sectors like banking and pharmaceuticals, where data
throughput, accuracy, and regulatory compliance are paramount, Oracle performance tuning plays a pivotal role in
ensuring that systems are fast, reliable, and scalable.
1. Banking: High-Volume Transaction Processing
In high-volume banking systems, even a minor increase in query response time can affect thousands of transactions. For
example:
• Use Case: A national bank noticed delayed ATM transactions during peak hours.
• Resolution: By using AWR/ADDM reports, they discovered slow I/O due to poor index usage on the transaction
logs. Creating the right composite indexes and tuning the SQL eliminated the bottleneck.
• Impact: Transaction time dropped from 7 seconds to under 1.5 seconds—boosting customer satisfaction and
reducing timeout failures.
2. Pharmaceuticals: Meeting Regulatory Batch Windows
Clinical trial data is often processed in batches, and delays can result in regulatory setbacks.
• Use Case: A pharmaceutical company had nightly ETL jobs exceeding their batch window.
• Resolution: ASH and SQL Monitor identified inefficient full table scans. SQL was rewritten using bind variables,
and optimizer statistics were updated.
• Impact: Batch jobs now complete 40% faster, ensuring timely reporting and compliance with FDA regulations.
3. Retail: Handling Holiday Season Load
E-commerce and retail applications need to scale fast during peak shopping periods.
• Use Case: During Black Friday, a retail platform experienced slow product searches and checkout processing.
• Resolution: SQL tuning and memory reconfiguration (SGA/PGA tuning) helped the platform handle 3x the traffic
with no degradation in user experience.
• Impact: Improved customer retention and conversion during the busiest shopping period.
KEY TAKEAWAYS
• Tune Before You Scale: Don’t just throw more hardware at the problem—optimize your SQL, memory, and I/O
usage first.
• Use Oracle’s Diagnostics: Tools like AWR, ADDM, and ASH provide a treasure trove of actionable insights. Learn
to read and interpret these reports regularly.
• Benchmark and Revalidate: Performance is not static. Regular benchmarking ensures that new releases,
patches, or changes don’t unknowingly degrade performance.
• Automate Where Possible: Use SQL Plan Management, Automatic Statistics Collection, and Resource Manager
to reduce manual intervention and maintain predictability.
Effective performance tuning is part art, part science. The principles, tools, and strategies discussed in this guide are
field-tested and proven across industries. When used consistently, they not only fix performance problems but help
build resilient, scalable, and efficient systems. Let this guide be your foundation—but continue learning, testing, and
tuning. The Oracle ecosystem evolves, and so should your skills.