
Chapter 14: Query Optimization

Introduction
● Alternative ways of evaluating a given query
● Equivalent expressions
● Different algorithms for each operation
● Select customer_name from branch, account, depositor where
branch.branch_name = account.branch_name and account.account_number = depositor.account_number
and branch.branch_city = 'Brooklyn';
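● For example, two equivalent relational-algebra expressions for this query (the second simply pushes the selection on branch_city down to the branch relation):

   Πcustomer_name(σbranch_city = “Brooklyn” (branch ⋈ account ⋈ depositor))
   Πcustomer_name(σbranch_city = “Brooklyn” (branch) ⋈ account ⋈ depositor)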

Introduction (Cont.)
● An evaluation plan defines exactly what algorithm is used for each operation, and how
the execution of the operations is coordinated.
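
● For example, one possible evaluation plan for the Brooklyn query above (the algorithm choices here are purely illustrative): apply σbranch_city = “Brooklyn” to branch using an index on branch_city (if one exists), hash-join the result with account, hash-join that result with depositor, and pipeline the tuples into the final projection Πcustomer_name.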

Introduction (Cont.)

● Cost difference between evaluation plans for a query can be enormous


● E.g. seconds vs. days in some cases
● Steps in cost-based query optimization
1. Generate logically equivalent expressions using equivalence rules
2. Annotate resultant expressions to get alternative query plans
3. Choose the cheapest plan based on estimated cost
● Estimation of plan cost based on:
● Statistical information about relations. Examples:
• number of tuples, number of distinct values for an attribute
● Statistics estimation for intermediate results
• to compute cost of complex expressions
● Cost formulae for algorithms, computed using statistics
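
● For example, if branch holds 1,000 tuples and V(branch_city, branch) = 20 distinct cities (hypothetical figures), then under a uniform-distribution assumption σbranch_city = “Brooklyn” (branch) is estimated to contain 1,000 / 20 = 50 tuples.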

Generating Equivalent Expressions

Transformation of Relational Expressions

● Two relational algebra expressions are said to be equivalent if the two expressions
generate the same set of tuples on every legal database instance
● Note: order of tuples is irrelevant
● In SQL, inputs and outputs are multisets of tuples
● Two expressions in the multiset version of the relational algebra are said to be
equivalent if the two expressions generate the same multiset of tuples on every
legal database instance.
● An equivalence rule says that expressions of two forms are equivalent; we can
replace an expression of the first form by an expression of the second form, or vice versa
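
● For example, σbalance > 1000(account ⋈ depositor) and (σbalance > 1000(account)) ⋈ depositor produce the same (multi)set of tuples on every legal database instance, so the two expressions are equivalent.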

Equivalence Rules
1. Conjunctive selection operations can be deconstructed into a sequence of
   individual selections:
      σθ1 ∧ θ2(E) ≡ σθ1(σθ2(E))

2. Selection operations are commutative:
      σθ1(σθ2(E)) ≡ σθ2(σθ1(E))

3. Only the last in a sequence of projection operations is needed, the others can be
   omitted:
      ΠL1(ΠL2(. . . (ΠLn(E)) . . . )) ≡ ΠL1(E)

4. Selections can be combined with Cartesian products and theta joins:
      (a) σθ(E1 × E2) ≡ E1 ⋈θ E2
      (b) σθ1(E1 ⋈θ2 E2) ≡ E1 ⋈θ1 ∧ θ2 E2

Equivalence Rules (Cont.)
5. Theta-join operations (and natural joins) are commutative:
      E1 ⋈θ E2 ≡ E2 ⋈θ E1

6. (a) Natural join operations are associative:
      (E1 ⋈ E2) ⋈ E3 ≡ E1 ⋈ (E2 ⋈ E3)

   (b) Theta joins are associative in the following manner:
      (E1 ⋈θ1 E2) ⋈θ2 ∧ θ3 E3 ≡ E1 ⋈θ1 ∧ θ3 (E2 ⋈θ2 E3)

      where θ2 involves attributes from only E2 and E3.

Pictorial Depiction of Equivalence Rules

Equivalence Rules (Cont.)
7. The selection operation distributes over the theta-join operation under the following
   two conditions:
   (a) When all the attributes in θ0 involve only the attributes of one of the expressions
       (E1) being joined:
          σθ0(E1 ⋈θ E2) ≡ (σθ0(E1)) ⋈θ E2

   (b) When θ1 involves only the attributes of E1 and θ2 involves only the attributes of E2:
          σθ1 ∧ θ2(E1 ⋈θ E2) ≡ (σθ1(E1)) ⋈θ (σθ2(E2))

Equivalence Rules (Cont.)
8. The projection operation distributes over the theta-join operation as follows:
   (a) if θ involves only attributes from L1 ∪ L2:
          ΠL1 ∪ L2(E1 ⋈θ E2) ≡ (ΠL1(E1)) ⋈θ (ΠL2(E2))

   (b) Consider a join E1 ⋈θ E2.
      ● Let L1 and L2 be sets of attributes from E1 and E2, respectively.
      ● Let L3 be attributes of E1 that are involved in join condition θ, but are not in L1 ∪ L2, and
      ● let L4 be attributes of E2 that are involved in join condition θ, but are not in L1 ∪ L2.
      Then:
          ΠL1 ∪ L2(E1 ⋈θ E2) ≡ ΠL1 ∪ L2((ΠL1 ∪ L3(E1)) ⋈θ (ΠL2 ∪ L4(E2)))

Equivalence Rules (Cont.)
9. The set operations union and intersection are commutative:
      E1 ∪ E2 ≡ E2 ∪ E1
      E1 ∩ E2 ≡ E2 ∩ E1

   (set difference is not commutative).

10. Set union and intersection are associative:
      (E1 ∪ E2) ∪ E3 ≡ E1 ∪ (E2 ∪ E3)
      (E1 ∩ E2) ∩ E3 ≡ E1 ∩ (E2 ∩ E3)

Equivalence Rules (Cont.)
11. The selection operation distributes over ∪, ∩ and –:
      σθ(E1 – E2) ≡ σθ(E1) – σθ(E2)
      and similarly for ∪ and ∩ in place of –

    Also:
      σθ(E1 – E2) ≡ σθ(E1) – E2
      and similarly for ∩ in place of –, but not for ∪

12. The projection operation distributes over union:
      ΠL(E1 ∪ E2) ≡ ΠL(E1) ∪ ΠL(E2)

Transformation Example: Pushing Selections Early

● Query: Find the names of all customers who have an account at some branch
located in Brooklyn.
Πcustomer_name(σbranch_city = “Brooklyn” (branch ⋈ (account ⋈ depositor)))

● Performing the selection as early as possible reduces the size of the relation to be joined, as follows.

● Transformation using rule 7a.


Πcustomer_name((σbranch_city = “Brooklyn” (branch)) ⋈ (account ⋈ depositor))

Example with Multiple Transformations

● Query: Find the names of all customers with an account at a Brooklyn branch whose
account balance is over $1000.
Πcustomer_name(σbranch_city = “Brooklyn” ∧ balance > 1000(branch ⋈ (account ⋈ depositor)))

● Transformation using join associativity (rule 6a) provides an opportunity to apply the
  “perform selections early” rule, resulting in the subexpression:

Πcustomer_name((σbranch_city = “Brooklyn” ∧ balance > 1000(branch ⋈ account)) ⋈ depositor)
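
● Rule 7b can then split the conjunctive selection, giving σbranch_city = “Brooklyn” (branch) ⋈ σbalance > 1000(account), so that each selection is applied directly to its own base relation.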

● The next slide shows a pictorial representation of the above transformation.

Multiple Transformations (Cont.)

Pushing selection early

Transformation Example: Pushing Projections
• Query: Find the names of all customers with an account at a Brooklyn
branch.
Πcustomer_name((σbranch_city = “Brooklyn” (branch) ⋈ account) ⋈ depositor)
● When we compute
(σbranch_city = “Brooklyn” (branch) ⋈ account)

we obtain a relation whose schema is:


(branch_name, branch_city, assets, account_number, balance)

● Instead of having all the above attributes in the intermediate result, push projections
  using equivalence rules 8a and 8b (project only account_number), which eliminates
  unneeded attributes:

Πcustomer_name((Πaccount_number(σbranch_city = “Brooklyn” (branch) ⋈ account)) ⋈ depositor)

● Performing the projection as early as possible reduces the size of the relation to be joined.
Join Ordering Example

● For all relations r1, r2, and r3,


(r1 ⋈ r2) ⋈ r3 = r1 ⋈ (r2 ⋈ r3)
(Join Associativity)

● If r2 ⋈ r3 is quite large and r1 ⋈ r2 is small, we choose

(r1 ⋈ r2) ⋈ r3
so that we compute and store a smaller temporary relation.
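
● For example, if r2 ⋈ r3 were to contain on the order of a million tuples while r1 ⋈ r2 contains only about a thousand (hypothetical figures), evaluating (r1 ⋈ r2) ⋈ r3 stores a temporary relation of roughly a thousand tuples instead of a million.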

Join Ordering Example (Cont.)
● Consider the expression
Πcustomer_name ((σbranch_city = “Brooklyn” (branch)) ⋈ (account ⋈ depositor))

● Could compute account ⋈ depositor first, and join the result with
  σbranch_city = “Brooklyn” (branch)
  but account ⋈ depositor is likely to be a large relation.

● Only a small fraction of the bank’s customers are likely to have accounts in
branches located in Brooklyn
● it is better to compute
  σbranch_city = “Brooklyn” (branch) ⋈ account
  first.

Enumeration of Equivalent Expressions

● Query optimizers use equivalence rules to systematically generate expressions equivalent to the given expression.

● Can generate all equivalent expressions as follows:


● Repeat
   - apply all applicable equivalence rules on every equivalent expression found so far
   - add newly generated expressions to the set of equivalent expressions
  Until no new equivalent expressions are generated
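
● A minimal sketch of this repeat-until loop in Python (apply_one_rule_step is a hypothetical callback standing in for the rule engine; expressions are assumed to be hashable values):

   def all_equivalent_expressions(expr, apply_one_rule_step):
       """Naively generate the closure of expr under the equivalence rules.
       apply_one_rule_step(e) returns every expression obtainable from e
       by a single rule application."""
       found = {expr}
       while True:
           new = {e2 for e in found for e2 in apply_one_rule_step(e)} - found
           if not new:        # stop: no new equivalent expressions were generated
               return found
           found |= new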

● The above approach is very expensive in space and time


● Two approaches
- Optimized plan generation based on transformation rules
- Special case approach for queries with only selections, projections and joins

Implementing Transformation Based Optimization
● Space requirements reduced by sharing common sub-expressions:
● when E1 is generated from E2 by an equivalence rule, usually only the top level of the two
  expressions differs; the subtrees below are the same and can be shared using pointers
• E.g. when applying join commutativity (E1 ⋈ E2 and E2 ⋈ E1 give the same result)


● Same sub-expression may get generated multiple times


• Detect duplicate sub-expressions and share one copy
● Time requirements are reduced by not generating all expressions
● Dynamic programming

Cost Estimation
● Cost of each operator computed as described in previous chapter.
● Need statistics of input relations
• E.g. number of tuples, sizes of tuples
● Inputs can be results of sub-expressions
● Need to estimate statistics of expression results
● To do so, we require additional statistics
• E.g. number of distinct values for an attribute
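
● For example, if depositor.account_number is a foreign key referencing account, then every depositor tuple joins with exactly one account tuple, so the estimated size of account ⋈ depositor is simply the number of tuples in depositor.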

Choice of Evaluation Plans
● Must consider the interaction of evaluation techniques when choosing evaluation plans
● choosing the cheapest algorithm for each operation independently may not
yield the best overall plan. E.g.
• merge-join may be costlier than hash-join, but may provide a sorted output
which reduces the cost for an outer level aggregation.
• nested-loop join may provide opportunity for pipelining
● Practical query optimizers incorporate elements of the following two broad
approaches:
1. Search all the plans and choose the best plan in a cost-based fashion.
2. Use heuristics to choose a plan.

Cost-Based Optimization
● Consider finding the best join order for r1 ⋈ r2 ⋈ . . . ⋈ rn.
● There are (2(n – 1))!/(n – 1)! different join orders for the above expression. For example,
  with n = 5 the number is 1680; with n = 10 it is greater than 17.6 billion!
● With n = 3, the number is 12. Therefore, if r1, r2 and r3 are 3 relations, there are a total
  of 12 join orders, as follows.
r1 ⋈ (r2 ⋈ r3)    r1 ⋈ (r3 ⋈ r2)    (r2 ⋈ r3) ⋈ r1    (r3 ⋈ r2) ⋈ r1
r2 ⋈ (r1 ⋈ r3)    r2 ⋈ (r3 ⋈ r1)    (r1 ⋈ r3) ⋈ r2    (r3 ⋈ r1) ⋈ r2
r3 ⋈ (r1 ⋈ r2)    r3 ⋈ (r2 ⋈ r1)    (r1 ⋈ r2) ⋈ r3    (r2 ⋈ r1) ⋈ r3
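● As a check, the formula gives (2 · 2)!/2! = 24/2 = 12 for n = 3, matching the list above; for n = 5 it gives 8!/4! = 40320/24 = 1680, and for n = 10 it gives 18!/9! ≈ 17.6 billion.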
● No need to generate all the join orders. Using dynamic programming, the
least-cost join order for any subset of {r1, r2, . . . rn} is computed only
once and stored for future use.

Optimization

● To find best join tree for a set of n relations:


● To find the best plan for a set S of n relations, consider all possible plans of the form:
  S1 ⋈ (S – S1) where S1 is any non-empty proper subset of S.
● Recursively compute costs for joining subsets of S to find the cost of each plan.
  Choose the cheapest of the 2^n – 2 alternatives.
● When plan for any subset is computed, store it and reuse it when it is required again,
instead of recomputing it
• Dynamic programming: a dynamic-programming algorithm stores the results of computations
  and reuses them, a procedure that can reduce execution time greatly.
  For example, suppose we want to find the best join order of the form (r1 ⋈ r2 ⋈ r3) ⋈ r4 ⋈ r5
  for the five relations r1, r2, r3, r4 and r5; this represents all join orders where r1, r2 and r3
  are joined first (in some order out of 12), and the result is joined (in some order) with r4 and r5.
  There are 12 different join orders for computing r1 ⋈ r2 ⋈ r3, and 12 orders for computing the
  join of this result with r4 and r5. Thus, 12 * 12 = 144 join orders to examine. However, once we
  have found the best join order for the subset {r1, r2, r3}, we can reuse it for the further joins
  with r4 and r5 and ignore the costlier orders of r1 ⋈ r2 ⋈ r3, so only 12 + 12 = 24 join orders
  actually need to be examined.
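
● A minimal sketch, in Python, of this dynamic-programming join-order search (the Plan class, the cost formula and the relation sizes are illustrative placeholders, not a real optimizer's cost model):

   from itertools import combinations

   class Plan:
       def __init__(self, expr, cost, size):
           self.expr, self.cost, self.size = expr, cost, size

   def best_plan(relations, base_plans, memo=None):
       """Least-cost join order for a frozenset of relation names."""
       memo = {} if memo is None else memo
       if relations in memo:                    # reuse a stored result
           return memo[relations]
       if len(relations) == 1:
           (r,) = relations
           memo[relations] = base_plans[r]      # plan for a single base relation
           return memo[relations]
       best = None
       rels = sorted(relations)
       # Consider every split S1 join (S - S1) with S1 a non-empty proper subset of S.
       for k in range(1, len(rels)):
           for subset in combinations(rels, k):
               s1 = frozenset(subset)
               left = best_plan(s1, base_plans, memo)
               right = best_plan(relations - s1, base_plans, memo)
               size = left.size * right.size    # crude placeholder size estimate
               cost = left.cost + right.cost + size
               if best is None or cost < best.cost:
                   best = Plan((left.expr, right.expr), cost, size)
       memo[relations] = best
       return best

   # Example (hypothetical relation sizes):
   #   base = {r: Plan(r, 0, n) for r, n in [("branch", 50), ("account", 10000), ("depositor", 8000)]}
   #   best_plan(frozenset(base), base).expr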
Left Deep Join Trees
● In left-deep join trees, the right-hand-side input for each join is a relation, not the
  result of an intermediate join. The advantage of using a left-deep join tree is that the
  outputs of the intermediate operations are not stored; they are pipelined directly as input
  to the next operation. Thus, it saves space.

Interesting Sort Orders

● Consider the expression (r1 ⋈ r2) ⋈ r3 (with A as the common attribute)


● An interesting sort order is a particular sort order of tuples that could be
useful for a later operation
● Using merge join to compute r1 ⋈ r2 may be costlier than hash join, but it generates a result
  sorted on attribute A, which in turn may make the merge join with r3 cheaper and thus reduce
  the overall cost.
● Sort order may also be useful for order by and for grouping

Heuristic Optimization
● Cost-based optimization is expensive, even with dynamic programming.
● Systems may use heuristics to reduce the number of choices that must be
made in a cost-based fashion.
● Heuristic optimization transforms the query-tree by using a set of rules that typically
(but not in all cases) improve execution performance:
● Perform selection early (reduces the number of tuples)
● Perform projection early (reduces the number of attributes)
● Perform most restrictive selection and join operations (i.e. with smallest result
size) before other similar operations.
● Some systems use only heuristics, others combine heuristics with partial cost-
based optimization.

Structure of Query Optimizers
● Many optimizers consider left-deep join orders.
● Plus heuristics to push selections and projections down the query tree
● Reduces optimization complexity and generates plans amenable to pipelined
evaluation.

● Some query optimizers integrate heuristic selection and the generation of alternative access plans.
● Frequently used approach
• heuristic rewriting of nested block structure and aggregation
• followed by cost-based join-order optimization for each block
● Some optimizers (e.g. SQL Server) apply transformations to entire query and do
not depend on block structure

BUT
● Even with the use of heuristics, cost-based query optimization imposes a substantial
overhead.
● But it is worth it for expensive queries
● Optimizers often use simple heuristics for very cheap queries, and perform exhaustive
  enumeration for more expensive queries
End of Chapter

