Database

The document outlines key concepts in database design and management, including data models, business rules, SQL-based applications, and the characteristics of Big Data. It emphasizes the importance of normalization, transaction management, and the role of DBMS in ensuring data integrity and consistency. Additionally, it discusses various database design considerations, the significance of keys, relationships, and the techniques for optimizing database performance.


1. What components should an implementation-ready data model contain?

• An entity is a person, place, thing, concept, or event about which data will be collected and stored.
• An attribute is a characteristic of an entity.
• A relationship describes an association among entities. There are three types of relationships: one-to-many (1:M or 1..*), many-to-many (M:N or *..*), and one-to-one (1:1 or 1..1).
• A constraint is a restriction placed on the data. Constraints help ensure data integrity and are normally expressed in the form of rules.
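
As a hedged illustration of these components in SQL DDL (the VENDOR/PRODUCT tables, columns, and constraints below are hypothetical examples, not taken from the document):

```sql
-- VENDOR and PRODUCT are entities; their columns are attributes.
-- The foreign key implements a 1:M relationship (one vendor supplies many products).
-- PRIMARY KEY, NOT NULL, and CHECK are constraints expressed as rules on the data.
CREATE TABLE VENDOR (
    VEND_CODE INTEGER PRIMARY KEY,
    VEND_NAME VARCHAR(35) NOT NULL
);

CREATE TABLE PRODUCT (
    PROD_CODE  VARCHAR(10) PRIMARY KEY,
    PROD_DESC  VARCHAR(50) NOT NULL,
    PROD_PRICE DECIMAL(9,2) CHECK (PROD_PRICE >= 0),
    VEND_CODE  INTEGER REFERENCES VENDOR (VEND_CODE)
);
```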

2. What do business rules require to be effective?

• A business rule is a brief, precise, and unambiguous description of a policy, procedure, or principle within a specific organization.
• Business rules apply to any organization that stores and uses data to generate information.
• Business rules are used to define entities, attributes, relationships, and constraints.
• To be effective, business rules must be easy to understand and widely disseminated.

3. What are the sources of business rules, and what is the database
designer's role with regard to business rules?

• The main sources of business rules are company managers, policy makers, department managers, and written documentation such as company procedures.
• Business rules are essential to database design for the following reasons:
• They help to standardize the company’s view of data.
• They can be a communication tool between users and designers.
• They allow the designer to understand the nature, role, and scope of the data.
• They allow the designer to understand business processes.
• They allow the designer to develop appropriate relationship participation rules and constraints and to create an accurate data model.

4. Describe the three parts involved in any SQL-based relational database application.

1. The end-user interface – the interface allows the end user to interact with the data.
2. A collection of tables stored in the database – the tables “present” the data to the end user in a way that is easy to understand.
3. SQL engine – the SQL engine executes all queries, or data requests.

5. Describe the three basic characteristics of Big Data databases.


• The basic characteristics of Big Data databases are volume, velocity, and variety, known as the 3 Vs.
• Volume refers to the amount of storage needed to hold all the data being generated.
• Velocity refers to the speed at which data must be stored and processed; a company’s business response time reflects the velocity of its Big Data storage and processing.
• Variety refers to the fact that data is collected in multiple data formats.

6. Describe what metadata are and what value they provide to the database system.

• Metadata, or data about data, are the data through which the end-user data is integrated and managed.
• Metadata describes the data characteristics and the set of relationships that links the data found within the database.
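
As a hedged illustration, many DBMSs expose this metadata (the data dictionary) through catalog views such as the SQL-standard information_schema (e.g., PostgreSQL, MySQL, SQL Server); the table name queried here is hypothetical.

```sql
-- Query the data dictionary to see the characteristics of a table's columns.
-- 'CUSTOMER' is a hypothetical table name; catalog names vary by DBMS.
SELECT column_name, data_type, is_nullable
FROM   information_schema.columns
WHERE  table_name = 'CUSTOMER';
```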

7. What are the advantages of having the DBMS between the end user's
applications and the database?

• The DBMS presents the end user with a single, integrated view of the data in the database.
• A DBMS provides the following advantages:
• Improved data sharing
• Improved data security
• Better data integration
• Minimized data inconsistency
• Improved data access
• Improved decision making
• Increased end-user productivity

8. Discuss some considerations when designing a database.

• Database design refers to the activities that focus on the design of the database structure that will be used to store and manage end-user data.
• Designing appropriate data repositories of integrated information using the two-dimensional table structures found in most databases is a process of decomposition: the integrated data must be decomposed properly into its constituent parts.
• A well-designed database facilitates data management and generates accurate and valuable information.
• A poorly designed database causes difficult-to-trace errors that may lead to poor decision making.

9. What are the problems associated with file systems? How do they
challenge the types of information that can be created from the data
as well as the accuracy of the information?

• The following problems with file systems challenge the types of information that can be created from data, as well as information accuracy:
• Lengthy development times
• Difficulty of getting quick answers
• Complex system administration
• Lack of security and limited data sharing
• Extensive programming

10. Discuss any three functions performed by the DBMS that guarantee the integrity and consistency of the data in the database.

1. Data dictionary management – the DBMS stores definitions of data elements and their relationships in a data dictionary.
2. Data storage management – the DBMS creates and manages the structures required for data storage; performance tuning ensures efficient performance.
3. Data transformation and presentation – the DBMS transforms entered data to conform to required data structures, and data is formatted to conform to the user’s logical expectations.
4. Security management – the DBMS creates a system that enforces user security and data privacy.

11. What is a key and how is it important in a relational model?

• A key consists of one or more attributes that determine other attributes.
• Keys are important because they are used to ensure that each row in a table is uniquely identifiable.
• They are also used to establish relationships among tables and to ensure the integrity of the data.
• The role of a key is based on the concept of determination, which is the state in which knowing the value of one attribute helps to determine the value of another.

12. Define entity integrity. What are the two requirements to ensure
entity integrity?

Entity integrity is the condition in which each row in the table has its own known, unique identity. Requirements: all primary key entries must be unique, and no part of a primary key may be null. Purpose: each row has a known, unique identity, and foreign key values can properly reference primary key values. Example: no invoice can have a duplicate number, nor can its number be null; in short, all invoices are uniquely identified by their invoice number.
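
A minimal sketch, assuming a hypothetical INVOICE table: declaring a primary key enforces both requirements, because a PRIMARY KEY column is implicitly unique and non-null.

```sql
-- The PRIMARY KEY constraint enforces entity integrity:
-- every INV_NUMBER must be unique and may not be NULL.
CREATE TABLE INVOICE (
    INV_NUMBER INTEGER PRIMARY KEY,
    INV_DATE   DATE NOT NULL
);

-- Rejected by the DBMS: duplicate primary key value.
-- INSERT INTO INVOICE VALUES (1001, DATE '2024-01-15');
-- INSERT INTO INVOICE VALUES (1001, DATE '2024-01-16');

-- Rejected by the DBMS: NULL in the primary key.
-- INSERT INTO INVOICE (INV_NUMBER, INV_DATE) VALUES (NULL, DATE '2024-01-17');
```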

13. Describe the use of null values in a database.

• A null is the absence of any data value, and it is never allowed in any part of a primary key.
• A null could represent any of the following:
• An unknown attribute value
• A known, but missing, attribute value
• A “not applicable” condition

14. Describe the use of the INTERSECT operator.

INTERSECT is an operator used to yield only the rows that are common to two union-compatible tables. As with UNION, the tables must be union-compatible to yield valid results; for example, you cannot use INTERSECT if one of the attributes is numeric and the other is character-based. For rows to be considered the same in both tables and appear in the result of the INTERSECT, the entire rows must be exact duplicates.
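
A small hedged example, assuming two hypothetical, union-compatible tables CUSTOMER and CUSTOMER_2 (INTERSECT is supported by most, though not all, DBMSs):

```sql
-- Returns only the rows that appear in BOTH result sets;
-- the selected columns must be union-compatible.
SELECT CUS_LNAME, CUS_FNAME FROM CUSTOMER
INTERSECT
SELECT CUS_LNAME, CUS_FNAME FROM CUSTOMER_2;
```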

15. Define an index. Explain the role of indexes in a relational database.

• An index is an orderly arrangement used to logically access rows in a table.
• The index key is the index’s reference point; it leads to the data location identified by the key.
• In a unique index, the index key can have only one pointer value associated with it.
• A table can have many indexes, but each index is associated with only one table.
• The index key can have multiple attributes.
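
A hedged sketch, assuming a hypothetical CUSTOMER table with CUS_LNAME, CUS_FNAME, and CUS_EMAIL columns:

```sql
-- A non-unique index: many customers may share a last name,
-- so one index key value can point to many rows.
CREATE INDEX IDX_CUS_LNAME ON CUSTOMER (CUS_LNAME);

-- A unique index: each index key value can have only one pointer value.
CREATE UNIQUE INDEX IDX_CUS_EMAIL ON CUSTOMER (CUS_EMAIL);

-- A composite index: the index key can have multiple attributes.
CREATE INDEX IDX_CUS_NAME ON CUSTOMER (CUS_LNAME, CUS_FNAME);
```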

16. Explain multivalued attributes with the help of examples. How are multivalued attributes indicated in the Chen Entity Relationship model?

• Multivalued attributes are attributes that can have many values. Example: a car’s color may be subdivided into many colors for the roof, body, and trim.
• In the Chen ER model, a multivalued attribute is indicated by a double line connecting the attribute to its entity.
• Multivalued attributes can be implemented in two ways:
• Create several new attributes, one for each component of the original multivalued attribute.
• Create a new entity composed of the original multivalued attribute’s components.
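
A hedged sketch of the second implementation option (creating a new entity), using hypothetical CAR and CAR_COLOR tables:

```sql
-- The multivalued "color" attribute becomes a new entity:
-- each CAR row can be associated with many CAR_COLOR rows.
CREATE TABLE CAR (
    CAR_VIN   VARCHAR(17) PRIMARY KEY,
    CAR_MODEL VARCHAR(30)
);

CREATE TABLE CAR_COLOR (
    CAR_VIN    VARCHAR(17) REFERENCES CAR (CAR_VIN),
    COLOR_PART VARCHAR(10),            -- e.g., 'roof', 'body', 'trim'
    COLOR_NAME VARCHAR(20),
    PRIMARY KEY (CAR_VIN, COLOR_PART)
);
```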

17. What is a weak relationship? Provide an example.

• A weak (non-identifying) relationship exists if the primary key of the related entity does not contain a primary key component of the parent entity.
• By default, relationships are established by having the primary key of the parent entity appear as a foreign key (FK) in the related entity (also known as the child entity).
• Example: suppose the 1:M relationship between COURSE and CLASS is defined as:
COURSE (CRS_CODE, DEPT_CODE, CRS_DESCRIPTION, CRS_CREDIT)
CLASS (CLASS_CODE, CRS_CODE, CLASS_SECTION, CLASS_TIME, ROOM_CODE, PROF_NUM)
• In this example, the CLASS primary key (CLASS_CODE) does not inherit a primary key component from the COURSE entity, so a weak relationship exists between COURSE and CLASS: CRS_CODE, the primary key of the parent entity, is only a foreign key in the CLASS entity.
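
In DDL form, the same example might look like this (column types are assumptions; only the table and column names come from the example above):

```sql
-- CRS_CODE appears in CLASS only as a foreign key, not as part of the CLASS
-- primary key, so the COURSE-CLASS relationship is weak (non-identifying).
CREATE TABLE COURSE (
    CRS_CODE        VARCHAR(10) PRIMARY KEY,
    DEPT_CODE       VARCHAR(5),
    CRS_DESCRIPTION VARCHAR(50),
    CRS_CREDIT      INTEGER
);

CREATE TABLE CLASS (
    CLASS_CODE    INTEGER PRIMARY KEY,
    CRS_CODE      VARCHAR(10) REFERENCES COURSE (CRS_CODE),
    CLASS_SECTION VARCHAR(5),
    CLASS_TIME    VARCHAR(20),
    ROOM_CODE     VARCHAR(10),
    PROF_NUM      INTEGER
);
```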

18. Explain mandatory participation in an entity relationship.

• Participation in an entity relationship is either optional or mandatory. Recall that relationships are bidirectional; that is, they operate in both directions.
• Mandatory participation means that one entity occurrence requires a corresponding entity occurrence in a particular relationship.
• If no optionality symbol is depicted with the entity, the entity is assumed to exist in a mandatory relationship with the related entity.
• The existence of a mandatory relationship indicates that the minimum cardinality is at least 1 for the mandatory entity.

19. What is a ternary relationship? Provide some business rules examples that specify the need for a ternary or higher-order relationship.

• A ternary relationship implies an association among three different entities. For example:
• A DOCTOR writes one or more PRESCRIPTIONs.
• A PATIENT may receive one or more PRESCRIPTIONs.
• A DRUG may appear in one or more PRESCRIPTIONs.
• (To simplify this example, assume that the business rule states that each prescription contains only one drug. In short, if a doctor prescribes more than one drug, a separate prescription must be written for each drug.)
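
A hedged sketch of how this ternary relationship might be implemented; the parent tables are reduced to minimal definitions and all column names and types are assumptions:

```sql
-- Assumed parent entities (minimal definitions).
CREATE TABLE DOCTOR  (DOC_ID    INTEGER     PRIMARY KEY);
CREATE TABLE PATIENT (PAT_ID    INTEGER     PRIMARY KEY);
CREATE TABLE DRUG    (DRUG_CODE VARCHAR(10) PRIMARY KEY);

-- The ternary relationship: each PRESCRIPTION row links one doctor,
-- one patient, and one drug (per the business rule above).
CREATE TABLE PRESCRIPTION (
    PRES_NUM  INTEGER PRIMARY KEY,
    DOC_ID    INTEGER     NOT NULL REFERENCES DOCTOR (DOC_ID),
    PAT_ID    INTEGER     NOT NULL REFERENCES PATIENT (PAT_ID),
    DRUG_CODE VARCHAR(10) NOT NULL REFERENCES DRUG (DRUG_CODE),
    PRES_DATE DATE
);
```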

20. Explain recursive relationships with the help of an example.

• A recursive relationship is a relationship within a single entity type; that is, it can exist between occurrences of the same entity set.
• Naturally, such a condition is found within a unary relationship. For example, an EMPLOYEE may manage other EMPLOYEEs, a 1:M “manages” relationship within the EMPLOYEE entity.
• One common pitfall when working with unary relationships is to confuse participation with referential integrity; the two are similar because they are both implemented through constraints on the same set of attributes.
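
A hedged sketch of the “manages” example; the table and column names are illustrative assumptions:

```sql
-- A 1:M recursive relationship inside a single entity:
-- EMP_MGR is a foreign key that refers back to EMPLOYEE itself.
CREATE TABLE EMPLOYEE (
    EMP_NUM   INTEGER PRIMARY KEY,
    EMP_LNAME VARCHAR(30) NOT NULL,
    EMP_MGR   INTEGER REFERENCES EMPLOYEE (EMP_NUM)  -- NULL for the top manager
);
```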

21. Explain normalization and its different forms.

• Normalization is a process for evaluating and correcting table structures to minimize data redundancies; it reduces the likelihood of data anomalies.
• Normalization assigns attributes to tables based on determination.
• Normalization works through a series of stages called normal forms; the first three are:
1. First normal form (1NF)
2. Second normal form (2NF)
3. Third normal form (3NF)

• The term 1NF describes the tabular format in which the following occur:
• All key attributes are defined.
• There are no repeating groups in the table.
• All attributes are dependent on the primary key.
• All relational tables satisfy 1NF requirements.

• Conversion to 2NF occurs only when the 1NF table has a composite primary key; if the 1NF table has a single-attribute primary key, then the table is automatically in 2NF.
• The 1NF-to-2NF conversion is simple; take the following steps:
• Step 1: Make new tables to eliminate partial dependencies.
• Step 2: Reassign corresponding dependent attributes.
• A table is in 2NF when it is in 1NF and it includes no partial dependencies.

• The data anomalies created by transitive dependencies are eliminated by completing the following two steps:
• Step 1: Make new tables to eliminate transitive dependencies.
• Step 2: Reassign corresponding dependent attributes.
• A table is in 3NF when it is in 2NF and it contains no transitive dependencies.

22. What steps are involved in the conversion to the third normal
form?

• Step 1: Make new tables to eliminate transitive dependencies. For every transitive dependency, write a copy of its determinant as the primary key of a new table. A determinant is any attribute whose value determines other values within a row. If you have three different transitive dependencies, you will have three different determinants. As with the conversion to 2NF, it is important that the determinant remain in the original table to serve as a foreign key.
• Step 2: Reassign corresponding dependent attributes. Identify the attributes that are dependent on each determinant identified in Step 1, place the dependent attributes in the new tables with their determinants, and remove them from their original tables.
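
A hedged sketch of these two steps with a hypothetical table: suppose EMPLOYEE_2NF (EMP_NUM, EMP_LNAME, DEPT_CODE, DEPT_NAME) is in 2NF but has the transitive dependency EMP_NUM → DEPT_CODE → DEPT_NAME. All names and types below are assumptions.

```sql
-- Step 1: the determinant DEPT_CODE becomes the primary key of a new table.
CREATE TABLE DEPARTMENT (
    DEPT_CODE VARCHAR(5) PRIMARY KEY,
    DEPT_NAME VARCHAR(30)             -- Step 2: the dependent attribute moves here
);

-- The determinant remains in the original table as a foreign key,
-- and the transitively dependent DEPT_NAME is removed from it.
CREATE TABLE EMPLOYEE_3NF (
    EMP_NUM   INTEGER PRIMARY KEY,
    EMP_LNAME VARCHAR(30),
    DEPT_CODE VARCHAR(5) REFERENCES DEPARTMENT (DEPT_CODE)
);
```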

23. What is transaction isolation and why is it important?

Isolation means that the data used during the execution of a transaction
cannot be used by a second transaction until the first one is completed. In
other words, if transaction T1 is being executed and is using the data item X,
that data item cannot be accessed by any other transaction until T1 ends.
This property is particularly useful in multiuser database environments
because several users can access and update the database at the same
time.
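
A hedged illustration: most SQL DBMSs let a transaction request an isolation level with the standard SET TRANSACTION statement, though the exact syntax and placement vary by product (the ordering below follows MySQL/SQL Server style, where the setting applies to the next transaction); the ACCOUNT table is hypothetical.

```sql
-- Request the strictest isolation for the next transaction: the data items
-- it uses cannot be used by another transaction until it completes.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN;                                  -- start transaction T1 (syntax varies by DBMS)
UPDATE ACCOUNT
SET    ACC_BALANCE = ACC_BALANCE - 100
WHERE  ACC_NUM = 1001;
COMMIT;                                 -- only now may other transactions use that data item
```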

24. How does a shared/exclusive lock schema increase the lock manager's overhead?

• An exclusive lock exists when access is reserved specifically for the transaction that locked the object; the exclusive lock must be used when the potential for conflict exists.
• A shared lock exists when concurrent transactions are granted read access on the basis of a common lock; a shared lock produces no conflict as long as all the concurrent transactions are read-only.
• A shared lock is issued when a transaction wants to read data from the database and no exclusive lock is held on that data item.
• An exclusive lock is issued when a transaction wants to update (write) a data item and no locks are currently held on that data item by any other transaction.
• Using the shared/exclusive locking concept, a lock can have three states: unlocked, shared (read), and exclusive (write).
• This schema increases the lock manager's overhead because the type of lock currently held must be known before another lock can be granted, both read and write lock operations (plus unlock) must be managed, and the schema must allow a lock to be upgraded from shared to exclusive or downgraded from exclusive to shared.
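
A hedged sketch of how a transaction might explicitly request these two lock modes in DBMSs that support row-level locking clauses (PostgreSQL/MySQL-style syntax shown; the ACCOUNT table is hypothetical, and the statements would normally run inside a transaction):

```sql
-- Shared (read) lock on the selected row: other readers are allowed,
-- but writers must wait until this transaction ends.
SELECT ACC_BALANCE
FROM   ACCOUNT
WHERE  ACC_NUM = 1001
FOR SHARE;

-- Exclusive (write) lock on the same row: blocks other lock requests,
-- shared or exclusive, until this transaction ends.
SELECT ACC_BALANCE
FROM   ACCOUNT
WHERE  ACC_NUM = 1001
FOR UPDATE;
```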

25. What are the three basic techniques to control deadlocks?

• A deadlock occurs when two transactions wait indefinitely for each other to unlock data.
• The three basic techniques to control deadlocks are deadlock prevention, deadlock detection, and deadlock avoidance.
• The choice of which deadlock control method to use depends on the database environment.

26. What are database checkpoints?

• Database checkpoints are operations in which the DBMS writes all of its
updated buffers in memory to disk. While this is happening, the DBMS does
not execute any other requests. A checkpoint operation is also registered in
the transaction log. As a result, the physical database and the transaction log
will be in sync. This synchronization is required because update operations
update the copy of the data in the buffers and not in the physical database.
Checkpoints are automatically and periodically executed by the DBMS
according to certain operational parameters but can also be executed
explicitly (as part of a database transaction statement) or implicitly (as part
of a database backup operation).

27. How do transaction recovery procedures use the deferred-write and write-through techniques to recover transactions?

• Transaction recovery procedures generally make use of the following techniques:
• Deferred-write technique (deferred update) – transaction operations do not immediately update the physical database; only the transaction log is updated.
• Write-through technique (immediate update) – the database is immediately updated by transaction operations during the transaction’s execution.

28. What are the modes in which a query optimizer can operate, and how does each perform query optimization?

• The query optimizer can operate in one of two modes:
• A rule-based optimizer uses rules and points to determine the best approach to execute a query; the rules assign a “fixed cost” to each SQL operation.
• A cost-based optimizer uses algorithms based on statistics about the objects being accessed to determine the best approach to execute a query; the optimizer adds up the processing cost, I/O cost, and resource cost (RAM and temporary space) to determine the total cost of a given execution plan.
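
A hedged illustration: most DBMSs can show the execution plan the optimizer chose, although the command differs by product (EXPLAIN in PostgreSQL/MySQL, EXPLAIN PLAN in Oracle, SHOWPLAN options in SQL Server); the table and column names are hypothetical.

```sql
-- Ask the optimizer to display its chosen execution plan
-- (PostgreSQL/MySQL-style syntax; varies by DBMS).
EXPLAIN
SELECT CUS_LNAME, CUS_BALANCE
FROM   CUSTOMER
WHERE  CUS_STATE = 'FL';
```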

29. Your management team wants to know why they need to perform SQL performance tuning, even though the DBMS automatically optimizes SQL queries.

• SQL performance tuning is evaluated from the client perspective, whereas most current-generation relational DBMSs perform automatic query optimization at the server end.
• Most SQL performance optimization techniques are DBMS-specific and, therefore, are rarely portable, even across different versions of the same DBMS; part of the reason for this is the constant advancement in database technologies.
• The DBMS uses general optimization techniques rather than techniques dictated by the special circumstances of a particular query's execution.
• A poorly written SQL query can, and usually will, bring the database system to its knees from a performance point of view; the majority of current database performance problems are related to poorly written SQL code.
• Therefore, although a DBMS provides general optimizing services, a carefully written query almost always outperforms a poorly written one.
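
A hedged example of the kind of client-side rewrite this refers to, assuming a hypothetical CUSTOMER table with an index on CUS_LNAME: wrapping the indexed column in a function usually prevents the optimizer from using that index, while an equivalent predicate on the bare column allows it.

```sql
-- Likely slower: the function applied to the indexed column
-- typically forces a full table scan.
SELECT CUS_CODE
FROM   CUSTOMER
WHERE  UPPER(CUS_LNAME) = 'SMITH';

-- Usually faster: the bare column lets the optimizer use the index on
-- CUS_LNAME (assuming last names are stored with a consistent case).
SELECT CUS_CODE
FROM   CUSTOMER
WHERE  CUS_LNAME = 'Smith';
```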
