
UNIVERSITY OF NIGERIA, NSUKKA

SCHOOL OF POSTGRADUATE STUDIES

FACULTY OF PHYSICAL SCIENCES


DEPARTMENT OF COMPUTER SCIENCE

TOPIC:
ARCHITECTURE OF DATABASE SYSTEMS AND ITS INFLUENCE ON
PERFORMANCE, WITH EMPHASIS ON PROCESS MODELS, PROCESS
AND MEMORY COORDINATION, AND THE RELATIONAL QUERY
PROCESSOR

A SEMINAR PAPER
PRESENTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR
THE COURSE: COS 906
(DATABASE DESIGN AND IMPLEMENTATION)

BY

UGWU FELIX CHINEDU


PG/PHD/16/82651

LECTURER: DR M. C. OKORONKWO

JANUARY, 2018
Abstract
Database Management Systems (DBMSs) are a ubiquitous and critical component of modern
computing, and the result of decades of research and development in both academia and industry.
Historically, DBMSs were among the earliest multi-user server systems to be developed, and
thus pioneered many systems design techniques for scalability and reliability now in use in many
other contexts. While many of the algorithms and abstractions used by a DBMS are textbook
material, there has been relatively sparse coverage in the literature of the systems design issues
that make a DBMS work. This paper presents an architectural discussion of DBMS design
principles, including process models, parallel architecture, storage system design, transaction
system implementation, query processor and optimizer architectures, and typical shared
components and utilities. Successful commercial and open-source systems are used as points of
reference, particularly when multiple alternative designs have been adopted by different groups.

1.0 Introduction
Database Management Systems (DBMSs) are complex, mission-critical software systems.
Database systems were among the earliest widely deployed online server systems and, as such,
have pioneered design solutions spanning not only data management, but also applications,
operating systems, and networked services. In this paper, we attempt to capture the main
architectural aspects of modern database systems. Our goal here is to focus on overall system
design and provide useful context for more widely known algorithms and concepts.

1.1 Relational Systems: The Life of a Query

The most mature and widely used database systems in production today are relational database
management systems (RDBMSs). These systems can be found at the core of much of the world’s
application infrastructure including e-commerce, medical records, billing, human resources,
payroll, customer relationship management and supply chain management, to name a few. The
advent of web-based commerce and community-oriented sites has only increased the volume and
breadth of their use. Relational systems serve as the repositories of record behind nearly all
online transactions and most online content management systems (blogs, wikis, social networks,
and the like). In addition to being important software infrastructure, relational database systems
serve as a well-understood point of reference for new extensions and revolutions in database
systems that may arise in the future. As a result, we focus on relational database systems
throughout this paper.

Fig. 1.1 Main components of a DBMS.

At heart, a typical RDBMS has five main components, as illustrated in Figure 1.1. As an
introduction to each of these components and the way they fit together, we step through the life
of a query in a database system. This also serves as an overview of the remaining sections of the
paper.

2.0 Process Models
When designing any multi-user server, early decisions need to be made regarding the execution
of concurrent user requests and how these are mapped to operating system processes or threads.
These decisions have a profound influence on the software architecture of the system, and on its
performance, scalability, and portability across operating systems. In this section, we survey a
number of options for DBMS process models, which serve as a template for many other highly
concurrent server systems. In this simplified context, a DBMS has three natural process model
options. From the simplest to the most complex, these are: (1) process per DBMS worker, (2)
thread per DBMS worker, and (3) process pool. Although these models are simplified, all three
are in use by commercial DBMSs today.

2.1 Uniprocessors and Lightweight Threads

2.1.1 Process per DBMS Worker


This model is relatively easy to implement since DBMS workers are mapped directly onto OS
processes. The OS scheduler manages the timesharing of DBMS workers and the DBMS
programmer can rely on OS protection facilities to isolate standard bugs like memory overruns.
Complicating this model are the in-memory data structures that are shared across DBMS
connections, including the lock table and buffer pool. These shared data structures must be
explicitly allocated in OS-supported shared memory accessible across all DBMS processes. This
requires OS support (which is widely available) and some special DBMS coding.

Fig. 2.1 Process per DBMS worker model: each DBMS worker is implemented as an OS process.

In practice, the extensive use of shared memory required in this model reduces some of the
advantages of
address space separation, given that a good fraction of “interesting” memory is shared across
processes. In terms of scaling to very large numbers of concurrent connections, process per
DBMS worker is not the most attractive process model. The scaling issues arise because a
process has more state than a thread and consequently consumes more memory. A process
switch requires switching security context, memory manager state, file and network handle
tables, and other process context.
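
As a rough illustration of this coding burden, the following sketch (in Python, using the multiprocessing module to stand in for raw OS processes and shared memory) shows a buffer-pool-like region that must be explicitly placed in named shared memory so that all worker processes can see it; the segment names and page layout are invented for illustration:

# Hypothetical sketch: process per DBMS worker, with the buffer pool placed
# in explicitly allocated, named shared memory so all worker processes see it.
# Python's multiprocessing module stands in for raw OS facilities.
from multiprocessing import Process, Lock
from multiprocessing.shared_memory import SharedMemory

PAGE_SIZE = 4096
POOL_PAGES = 8

def worker(shm_name, page_no, lock):
    # Each OS process attaches to the same named segment; ordinary heap
    # allocations would NOT be visible across process boundaries.
    shm = SharedMemory(name=shm_name)
    try:
        with lock:  # the lock table would likewise live in shared memory
            start = page_no * PAGE_SIZE
            shm.buf[start:start + 5] = b"dirty"
    finally:
        shm.close()

if __name__ == "__main__":
    pool = SharedMemory(create=True, size=PAGE_SIZE * POOL_PAGES)
    lock = Lock()
    procs = [Process(target=worker, args=(pool.name, i, lock)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(bytes(pool.buf[:5]))  # b'dirty': worker writes are visible here
    pool.close()
    pool.unlink()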

2.1.2 Thread per DBMS Worker


In the thread per DBMS worker model (Figure 2.2), a single multithreaded process hosts all the
DBMS worker activity.

Fig. 2.2 Thread per DBMS worker model: each DBMS worker is implemented as an OS thread.

A dispatcher thread (or a small handful of such threads) listens for new DBMS client connections.
Each connection is allocated a new thread. As each client submits SQL requests, each request is
executed entirely by its corresponding thread running a DBMS worker. This thread runs within
the DBMS process and, once the request is complete, the result is returned to the client and the
thread waits on the connection for the next request from that same client. The usual
multi-threaded programming challenges arise in this architecture: the OS does not protect threads
from each other’s memory overruns and stray pointers; debugging is tricky, especially with race
conditions; and the software can be difficult to port across operating systems due to differences
in threading interfaces and multi-threaded scaling.
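
A minimal sketch of this dispatch loop follows; the wire protocol, port number, and execute_sql stub are invented for illustration:

# Hypothetical sketch: a dispatcher thread accepts connections, and each
# connection gets a dedicated worker thread for the life of the session.
import socket
import threading

def execute_sql(request: bytes) -> bytes:
    return b"OK\n"  # stand-in for the real query processor

def dbms_worker(conn: socket.socket) -> None:
    with conn:
        while True:
            request = conn.recv(4096)      # wait for the next SQL request
            if not request:
                break                      # client disconnected
            result = execute_sql(request)  # runs entirely on this thread
            conn.sendall(result)           # return the result, then wait again

def dispatcher(host: str = "127.0.0.1", port: int = 5433) -> None:
    with socket.create_server((host, port)) as server:
        while True:
            conn, _addr = server.accept()
            # One thread per DBMS worker; all threads share one address space,
            # so the buffer pool and lock table need no special shared memory.
            threading.Thread(target=dbms_worker, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    dispatcher()  # serves until interrupted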
2.1.3 Process Pool
In the process pool model (Figure 2.3), each DBMS worker is executed by one of a pool of OS
processes rather than by a dedicated process. The pool size is bounded and often fixed. If a
request comes in and all processes are already servicing other requests, the new request must
wait for a process to become available.

Process pool has all of the advantages of process per DBMS worker but, since a much smaller
number of processes is required, is considerably more memory efficient. Process pool is often
implemented with a dynamically resizable pool that grows toward some maximum size when a
large number of concurrent requests arrive. When the request load is lighter, the pool can be
reduced to fewer waiting processes. As with thread per DBMS worker, the process pool model is
supported by several current-generation DBMSs in use today.

Fig. 2.3 Process pool model: each DBMS worker is allocated to one of a pool of OS processes as
work requests arrive from the client, and the process is returned to the pool once the request is
processed.
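
A minimal sketch of a bounded pool follows, using Python's ProcessPoolExecutor as a stand-in for a DBMS-managed process pool; execute_sql is a stub invented for illustration:

# Hypothetical sketch: a bounded process pool.
from concurrent.futures import ProcessPoolExecutor

def execute_sql(request: str) -> str:
    return f"result of {request!r}"  # stand-in for real query execution

if __name__ == "__main__":
    # The pool size is bounded (here, fixed at 4). A resizable variant would
    # grow toward some maximum under load and shrink when load is light.
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(execute_sql, q)
                   for q in ("SELECT 1", "SELECT 2", "SELECT 3")]
        for f in futures:
            # If all processes had been busy, these requests simply queued
            # until a process became available.
            print(f.result())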

3.0 Parallel Architecture: Processes and Memory Coordination
In this section, we summarize the standard DBMS terminology for parallel architectures and
discuss the process models and memory coordination issues in each.
3.1 Shared Memory
A shared-memory parallel system (Figure 3.1) is one in which all processors can access the same
RAM and disk with roughly the same performance. This architecture is fairly standard today:
most server hardware ships with between two and eight processors. High-end machines can ship
with dozens of processors, but tend to be sold at a large premium relative to the processing
resources provided. Highly parallel shared-memory machines are one of the last remaining “cash
cows” in the hardware industry, and are used heavily in high-end online transaction processing
applications. The cost of server hardware is usually dwarfed by the costs of administering the
systems, so the expense of buying a smaller number of large, very expensive systems is
sometimes viewed as an acceptable trade-off.

Fig. 3.1 Shared-memory architecture.
Multi-core processors support multiple processing cores on a single chip and share some
infrastructure such as caches and the memory bus. This makes them quite similar to a shared-
memory architecture in terms of their programming model. Today, nearly all serious database
deployments involve multiple processors, with each processor having more than one CPU.
DBMS architectures need to be able to fully exploit this potential parallelism. Fortunately, all
three of the DBMS architectures described in Section 2 run well on modern shared-memory
hardware architectures. The process model for shared-memory machines follows quite naturally
from the uniprocessor approach. In fact, most database systems evolved from their initial
uniprocessor implementations to shared-memory implementations. On shared-memory
machines, the OS typically supports the transparent assignment of workers (processes or threads)
across the processors, and the shared data structures continue to be accessible to all. All three
models run well on these systems and support the execution of multiple, independent SQL
requests in parallel. The main challenge is to modify the query execution layers to take
advantage of the ability to parallelize a single query across multiple CPUs.
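
As an illustration of this kind of intra-query parallelism, the sketch below (toy table and chunking scheme, invented for illustration) splits a scan-and-aggregate query into partial aggregates computed on separate processors and then combines them:

# Hypothetical sketch of intra-query parallelism on a shared-memory machine:
# split the table into chunks, compute partial sums on separate processors,
# then combine. The table and "amount" column are invented for illustration.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(row["amount"] for row in chunk)

def parallel_sum(table, workers=4):
    step = max(1, len(table) // workers)
    chunks = [table[i:i + step] for i in range(0, len(table), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))  # combine partial results

if __name__ == "__main__":
    table = [{"amount": i} for i in range(1_000)]
    print(parallel_sum(table))  # 499500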
3.2 Shared-Nothing
A shared-nothing parallel system (Figure 3.2) is made up of a cluster of independent machines
that communicate over a high-speed network interconnect or, increasingly frequently, over
commodity networking components. There is no way for a given system to directly access the
memory or disk of another system. Shared-nothing systems provide no hardware sharing
abstractions, leaving coordination of the various machines entirely in the hands of the DBMS.
The most common technique employed by DBMSs to support these clusters is to run their
standard process model on each machine, or node, in the cluster. Each node is capable of
accepting client SQL requests, accessing necessary metadata, compiling SQL requests, and
performing data access just as on a single shared-memory system, as described above.

Fig. 3.2 Shared-nothing architecture.

The main difference is that each
system in the cluster stores only a portion of the data. Rather than running the queries they
receive against their local data only, the requests are sent to other members of the cluster and all
machines involved execute the query in parallel against the data they are storing. The tables are
spread over multiple systems in the cluster using horizontal data partitioning to allow each
processor to execute independently of the others. Each tuple in the database is assigned to an
individual machine, and hence each table is sliced “horizontally” and spread across the
machines. Typical data partitioning schemes include hash-based partitioning by tuple attribute,
range-based partitioning by tuple attribute, round-robin, and hybrid, which combines range-based
and hash-based partitioning. Each individual machine is responsible for the access, locking, and
logging of the data on its local disks. During query execution, the query optimizer chooses how
to horizontally re-partition tables and intermediate results across the machines to satisfy the
query, and it assigns each machine a logical partition of the work. The query executors on the
various machines ship data requests and tuples to each other, but do not need to transfer any
thread state or other low-level information. As a result of this value-based partitioning of the
database tuples, minimal coordination is required in these systems.
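
The following sketch (hypothetical table schema, range bounds, and node count) shows how a tuple might be routed to a node under the hash-based, range-based, and round-robin schemes just described:

# Hypothetical sketch of horizontal partitioning: each function maps a tuple
# to the node that stores it.
import hashlib
from itertools import count

N_NODES = 4

def hash_partition(tup, attr):
    # Hash-based: stable hash of the partitioning attribute, modulo node count.
    digest = hashlib.sha1(str(tup[attr]).encode()).digest()
    return int.from_bytes(digest[:8], "big") % N_NODES

RANGE_BOUNDS = [100, 200, 300]  # node 0: < 100, node 1: < 200, node 2: < 300

def range_partition(tup, attr):
    # Range-based: first node whose upper bound exceeds the attribute value.
    for node, bound in enumerate(RANGE_BOUNDS):
        if tup[attr] < bound:
            return node
    return len(RANGE_BOUNDS)  # the last node takes the tail

_next = count()

def round_robin_partition(tup):
    # Round-robin: ignores tuple contents entirely.
    return next(_next) % N_NODES

row = {"order_id": 42, "amount": 250}
print(hash_partition(row, "order_id"),
      range_partition(row, "amount"),
      round_robin_partition(row))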
Good partitioning of the data is required, however, for good performance. This places a
significant burden on the Database Administrator (DBA) to lay out tables intelligently, and on
the query optimizer to do a good job partitioning the workload.

This simple partitioning solution does not handle all issues in the
DBMS. For example, explicit cross-processor coordination must take place to handle transaction
completion, provide load balancing, and support certain maintenance tasks. In particular, the
processors must exchange explicit control messages for issues like distributed deadlock detection
and two-phase commit [1]. This requires additional logic, and can be a performance bottleneck if
not done carefully. Partial failure is also a possibility that has to be managed in a shared-nothing
system. In a shared-memory system, the failure of a processor typically results in shutdown of
the entire machine, and hence the entire DBMS. In a shared-nothing system, the failure of a
single node will not necessarily affect other nodes in the cluster. But it will certainly affect the
overall behavior of the DBMS, since the failed node hosts some fraction of the data in the
database. There are at least three possible approaches in this scenario. The first is to bring down
all nodes if any node fails; this in essence emulates what would happen in a shared-memory
system. The second approach, which Informix dubbed “Data Skip,” allows queries to be
executed on any nodes that are up, “skipping” the data on the failed node. This is useful in
scenarios where data availability is more important than completeness of results. But best-effort
results do not have well-defined semantics, and for many workloads this is not a useful choice,
particularly because the DBMS is often used as the “repository of record” in a multi-tier system,
and availability-vs-consistency trade-offs tend to be made in a higher tier (often in an application
server). The third approach is to employ redundancy schemes ranging from full database failover
(requiring double the number of machines and software licenses) to fine-grain redundancy like
chained declustering [2]. In this latter technique, tuple copies are spread across multiple nodes in
the cluster. The advantage of chained declustering over simpler schemes is that (a) it requires
fewer machines to be deployed to guarantee availability than naïve schemes, and (b) when a node
does fail, the system load is distributed fairly evenly over the remaining nodes: the n − 1
remaining nodes each do n/(n − 1) of the original work, and this form of linear degradation in
performance continues as nodes fail. In practice, most current generation commercial systems are
somewhere in the middle, neither as coarse-grained as full database redundancy nor as
fine-grained as chained declustering.

The shared-nothing architecture is fairly common today, and has unbeatable scalability and cost
characteristics. It is mostly used at the extreme high end, typically
for decision-support applications and data warehouses. In an interesting combination of hardware
architectures, a shared-nothing cluster is often made up of many nodes, each of which is a
shared-memory multi-processor.
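
A toy sketch of the placement and failover logic behind chained declustering appears below; the layout rule (primary copy of partition i on node i, backup on node i + 1 mod n) is one simple variant for illustration, not the exact scheme of any particular system:

# Toy sketch of chained declustering data placement and read failover.
N = 4  # nodes, one primary partition per node

def placement(partition, n=N):
    # Return (primary node, backup node) for a partition.
    return partition % n, (partition + 1) % n

def serving_node(partition, failed, n=N):
    # Route a read to a live copy after node failures.
    primary, backup = placement(partition, n)
    if primary not in failed:
        return primary
    if backup not in failed:
        return backup
    raise RuntimeError("both copies of partition %d lost" % partition)

failed = {2}
print([serving_node(p, failed) for p in range(N)])  # [0, 1, 3, 3]
# A full implementation would also shift fractions of primary work along the
# chain so each survivor ends up with about n/(n-1) of its original load.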
3.3 Shared-Disk
A shared-disk parallel system (Figure 3.3) is one in which all processors can access the disks
with about the same performance, but are unable to access each other’s RAM. This architecture
is quite common, with two prominent examples being Oracle RAC and DB2 for zSeries
SYSPLEX. Shared-disk has become more common in recent years with the increasing popularity
of Storage Area Networks (SAN). A SAN allows one or more logical disks to be mounted by one
or more host systems, making it easy to create shared-disk configurations.
One potential advantage of shared-disk over shared-nothing systems is their lower cost of
administration. DBAs of shared-disk systems do not have to consider partitioning tables across
machines in order to achieve parallelism. But very large databases still typically do require
partitioning so, at this scale, the difference becomes less pronounced. Another compelling
feature of the shared-disk architecture is that the failure of a single DBMS processing node does
not affect the other nodes’ ability to access the entire database. This is in contrast to both shared-
memory systems that fail as a unit, and shared-nothing systems that lose access to at least some
data upon a node failure (unless some alternative data redundancy scheme is used). However,
even with these advantages, shared-disk systems are still vulnerable to some single points of
failure.

Fig. 3.3 Shared-disk architecture.

If the data is damaged or otherwise corrupted by hardware or software failure
before reaching the storage subsystem, then all nodes in the system will have access to only this
corrupt page. If the storage subsystem is using RAID or other data redundancy techniques, the
corrupt page will be redundantly stored but still corrupt in all copies. Because no partitioning of
the data is required in a shared-disk system, data can be copied into RAM and modified on
multiple machines.
Unlike in shared-memory systems, there is no natural memory location to coordinate this sharing
of the data; each machine has its own local memory for locks and buffer pool pages. Hence explicit
coordination of data sharing across the machines is needed. Shared-disk systems depend upon a
distributed lock manager facility, and a cache coherency protocol for managing the distributed
buffer pools [3]. These are complex software components, and can be bottlenecks for workloads
with significant contention. Some systems such as the IBM zSeries SYSPLEX implement the
lock manager in a hardware subsystem.
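
In miniature, a lock manager of this kind is a shared table of page locks with shared and exclusive modes; the sketch below is a deliberately simplified, single-site stand-in for a real distributed lock manager, which must additionally be networked, fault tolerant, and paired with a cache-coherency protocol:

# Deliberately simplified stand-in for a distributed lock manager.
from collections import defaultdict

class LockManager:
    def __init__(self):
        self.holders = defaultdict(set)  # page id -> set of (node, mode)

    def acquire(self, page, node, mode):
        # Grant a lock if compatible; return False so the caller can queue.
        held = self.holders[page]
        if mode == "S" and all(m == "S" for _, m in held):
            held.add((node, "S"))
            return True
        if mode == "X" and not held:
            held.add((node, "X"))
            return True
        return False  # incompatible; caller must wait and retry

    def release(self, page, node, mode):
        self.holders[page].discard((node, mode))

lm = LockManager()
print(lm.acquire(7, "node-a", "S"))  # True
print(lm.acquire(7, "node-b", "S"))  # True: shared locks are compatible
print(lm.acquire(7, "node-c", "X"))  # False: must wait for readers to finish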
3.4 Non-Uniform Memory Access (NUMA)
Non-Uniform Memory Access (NUMA) systems provide a shared memory programming model
over a cluster of systems with independent memories. Each system in the cluster can access its
own local memory quickly, whereas remote memory access across the high-speed cluster
interconnect is somewhat delayed. The architecture name comes from this non-uniformity of
memory access times. NUMA hardware architectures are an interesting middle ground between
shared-nothing and shared-memory systems. They are much easier to program than shared-
nothing clusters, and also scale to more processors than shared-memory systems by avoiding
shared points of contention such as shared-memory buses. NUMA clusters have not been broadly
successful commercially but one area where NUMA design concepts have been adopted is
shared memory multi-processors (Section 3.1). As shared memory multi-processors have scaled
up to larger numbers of processors, they have shown increasing non-uniformity in their memory
architectures. Often the memory of large shared memory multi-processors is divided into
sections and each section is associated with a small subset of the processors in the system. Each
combined subset of memory and CPUs is often referred to as a pod. Each processor can access
local pod memory slightly faster than remote pod memory. This use of the NUMA design pattern
has allowed shared memory systems to scale to very large numbers of processors. As a
consequence, NUMA shared memory multi-processors are now very common, whereas NUMA
clusters have never achieved any significant market share.

One way that DBMSs can run on
NUMA shared memory systems is by ignoring the non-uniformity of memory access. This works
acceptably provided the non-uniformity is minor. When the ratio of near memory to far-memory
access times rises above the 1.5:1 to 2:1 range, the DBMS needs to employ optimizations to
avoid serious memory access bottlenecks. These optimizations come in a variety of forms, but all
follow the same basic approach: (a) when allocating memory for use by a processor, use memory
local to that processor (avoid use of far memory) and (b) ensure that a given DBMS worker is
always scheduled if possible on the same hardware processor it was on previously. This
combination allows DBMS workloads to run well on high-scale shared memory systems having
some non-uniformity of memory access times. Although NUMA clusters have all but
disappeared, the programming model and optimization techniques remain important to current
generation DBMS systems since many high-scale shared memory systems have significant non-
uniformity in their memory access performance.
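
Optimization (b) can be approximated with CPU affinity; the sketch below is Linux-only (os.sched_setaffinity is not available on all platforms) and relies on the common first-touch placement policy so that memory a pinned worker allocates tends to land in its local pod, which also approximates optimization (a):

# Hedged sketch: pin each DBMS worker process to a fixed CPU so the OS keeps
# running it near the memory it allocated.
import os
from multiprocessing import Process

def pinned_worker(cpu):
    os.sched_setaffinity(0, {cpu})  # 0 means "the calling process"
    # Under first-touch placement, memory allocated and first written here
    # tends to come from this CPU's local pod.
    local_buffer = bytearray(64 * 1024 * 1024)
    local_buffer[0] = 1  # first touch places the page locally

if __name__ == "__main__":
    workers = [Process(target=pinned_worker, args=(cpu,))
               for cpu in range(min(4, os.cpu_count() or 1))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()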

4.0 Relational Query Processor
A relational query processor takes a declarative SQL statement, validates it, optimizes it into a
procedural dataflow execution plan, and (subject to admission control) executes that dataflow
program on behalf of a client program. The client program then fetches (“pulls”) the result
tuples, typically one at a time or in small batches. The major components of a relational query
processor are shown in Figure 1.1. In this section, we concern ourselves with both the query
processor and some non-transactional aspects of the storage manager’s access methods. In
general, relational query processing can be viewed as a single-user, single-threaded task. In this
section we focus on the common-case SQL commands: Data Manipulation Language (DML)
statements including SELECT, INSERT, UPDATE, and DELETE. Data Definition Language
(DDL) statements such as CREATE TABLE and CREATE INDEX are typically not processed
by the query optimizer. These statements are usually implemented procedurally in static DBMS
logic through explicit calls to the storage engine and catalog manager.
4.1 Query Parsing and Authorization
Given an SQL statement, the main tasks for the SQL Parser are to (1) check that the query is
correctly specified, (2) resolve names and references, (3) convert the query into the internal
format used by the optimizer, and (4) verify that the user is authorized to execute the query.
Some DBMSs defer some or all security checking to execution time but, even in these systems,
the parser is still responsible for gathering the data needed for the execution-time security check.
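
The four tasks can be made concrete with a toy catalog and query representation (both invented for illustration; real parsers work from a full SQL grammar, a catalog manager, and a privilege model):

# Toy sketch of the parser's four tasks against a hand-rolled catalog.
CATALOG = {"orders": {"columns": {"id", "amount"}, "select_priv": {"alice"}}}

def parse_and_authorize(query, user):
    table = query["from"]
    if table not in CATALOG:                      # (2) resolve names
        raise ValueError(f"unknown table {table!r}")
    meta = CATALOG[table]
    unknown = set(query["select"]) - meta["columns"]
    if unknown:                                   # (1) check the query
        raise ValueError(f"unknown columns {unknown}")
    if user not in meta["select_priv"]:           # (4) authorization
        raise PermissionError(f"{user} may not read {table}")
    # (3) convert to the optimizer's internal format (here, a nested dict)
    return {"op": "project", "cols": sorted(query["select"]),
            "input": {"op": "scan", "table": table}}

print(parse_and_authorize({"select": ["amount"], "from": "orders"}, "alice"))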
4.2 Query Rewrite
The query rewrite module, or rewriter, is responsible for simplifying and normalizing the query
without changing its semantics. It can rely only on the query and on metadata in the catalog, and
cannot access data in the tables. Although we speak of “rewriting” the query, most rewriters
actually operate on an internal representation of the query, rather than on the original SQL
statement text. The query rewrite module usually outputs an internal representation of the query
in the same internal format that it accepted at its input.
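
A sketch of one such rewrite, constant folding over a toy expression-tree representation (the tuple-based format is invented for illustration), shows how the rewriter simplifies a predicate using only the query tree itself:

# Hypothetical rewrite pass: fold constant arithmetic and comparisons, and
# drop always-true conjuncts, without ever reading table data.
def rewrite(expr):
    if not isinstance(expr, tuple):
        return expr                                # literal or column name
    op, *args = expr
    args = [rewrite(a) for a in args]
    if op == "+" and all(isinstance(a, int) for a in args):
        return args[0] + args[1]                   # constant folding
    if op == "=" and all(isinstance(a, int) for a in args):
        return args[0] == args[1]                  # fold constant comparison
    if op == "and":
        args = [a for a in args if a is not True]  # drop TRUE conjuncts
        if not args:
            return True
        if len(args) == 1:
            return args[0]
    return (op, *args)

# WHERE (1 + 1) = 2 AND amount > 100  rewrites to  WHERE amount > 100
pred = ("and", ("=", ("+", 1, 1), 2), (">", "amount", 100))
print(rewrite(pred))  # ('>', 'amount', 100)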
4.3 Query Optimizer
In many systems, queries are first broken into SELECT-FROM-WHERE query blocks. The
optimization of each individual query block is then done using techniques similar to those
described in the famous paper by Selinger et al. on the System R optimizer [4]. On completion, a
few operators are typically added to the top of each query block as post-processing to compute
GROUP BY, ORDER BY, HAVING and DISTINCT clauses if they exist. The various blocks
are then stitched together in a straightforward fashion. The resulting query plan can be
represented in a number of ways. The original System R prototype compiled query plans into
machine code, whereas the early INGRES prototype generated an interpretable query plan.
Query interpretation was listed as a “mistake” by the INGRES authors in their retrospective
paper in the early 1980s [5], but Moore’s law and software engineering have vindicated the
INGRES decision to some degree. Ironically, compiling to machine code is listed by some
researchers on the System R project as a mistake. When the System R code base was made into a
commercial DBMS system (SQL/DS), the development team’s first change was to replace the
machine code executor with an interpreter.
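
The flavor of Selinger-style optimization within one query block can be sketched as dynamic programming over subsets of tables, keeping the cheapest left-deep plan for each subset; the cost model and cardinalities below are made up purely for illustration:

# Hedged sketch of Selinger-style join ordering for one query block.
from itertools import combinations

CARD = {"orders": 10_000, "customers": 1_000, "items": 50_000}

def join_cost(left_card, right_table):
    # Toy cost model: proportional to left input size times right table size.
    return left_card * CARD[right_table] // 1_000

def best_plan(tables):
    # best[subset] = (cost, estimated cardinality, left-deep plan tree)
    best = {frozenset([t]): (0, CARD[t], t) for t in tables}
    for size in range(2, len(tables) + 1):
        for subset in combinations(tables, size):
            s = frozenset(subset)
            candidates = []
            for t in subset:                  # t is joined last (left-deep)
                cost, card, plan = best[s - {t}]
                candidates.append((cost + join_cost(card, t),
                                   card * CARD[t] // 10_000,  # toy selectivity
                                   (plan, "JOIN", t)))
            best[s] = min(candidates, key=lambda c: c[0])
    return best[frozenset(tables)]

print(best_plan(["orders", "customers", "items"]))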

4.4 Query Executor


The query executor operates on a fully-specified query plan. This is typically a directed dataflow
graph that connects operators that encapsulate base-table access and various query execution
algorithms. In some systems, this dataflow graph is already compiled into low-level op-codes by
the optimizer. In this case, the query executor is basically a run time interpreter. In other systems,
the query executor receives a representation of the data flow graph and recursively invokes
procedures for the operators based on the graph layout. We focus on this latter case, as the op-
code approach essentially compiles the logic we describe here into a program. Most modern
query executors employ the iterator model that was used in the earliest relational systems.
Iterators are most simply described in an object-oriented fashion. Each iterator specifies its
inputs, which define the edges in the dataflow graph. All operators in a query plan (the nodes in
the dataflow graph) are implemented as subclasses of the iterator class. The set of subclasses in a
typical system might include filescan, indexscan, sort, nested-loops join, merge-join, hashjoin,
duplicate-elimination, and grouped-aggregation. An important feature of the iterator model is
that any subclass of iterator can be used as input to any other. Hence each iterator’s logic is
independent of its children and parents in the graph, and special-case code for particular
combinations of iterators is not needed. Graefe provides more details on iterators in his query
execution survey [6]. The interested reader is also encouraged to examine the open-source
PostgreSQL code base. PostgreSQL utilizes moderately sophisticated implementations of the
iterators for most standard query execution algorithms.
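
A sketch of the iterator model follows; the operator names and get_next() interface are illustrative rather than taken from any specific system:

# Illustrative sketch of the iterator model: every operator is a subclass
# with the same open/get_next/close interface, so any operator can consume
# any other without special-case code.
class Iterator:
    def open(self): pass
    def get_next(self):  # returns one tuple, or None at end of input
        raise NotImplementedError
    def close(self): pass

class FileScan(Iterator):
    def __init__(self, rows):
        self.rows = rows
    def open(self):
        self.pos = 0
    def get_next(self):
        if self.pos >= len(self.rows):
            return None
        row = self.rows[self.pos]
        self.pos += 1
        return row

class Filter(Iterator):
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
    def open(self):
        self.child.open()
    def get_next(self):
        while (row := self.child.get_next()) is not None:
            if self.pred(row):
                return row
        return None
    def close(self):
        self.child.close()

# SELECT * FROM t WHERE amount > 100, as a two-operator dataflow graph
plan = Filter(FileScan([{"amount": 50}, {"amount": 250}]),
              lambda r: r["amount"] > 100)
plan.open()
while (row := plan.get_next()) is not None:  # the client "pulls" tuples
    print(row)
plan.close()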

4.5 Data Warehouses


This topic is relevant for two main reasons:
1. Data warehouses are a very important application of DBMS technology. Some claim that
warehouses account for 1/3 of all DBMS activity [7, 8].
2. The conventional query optimization and execution engines discussed so far in this section do
not work well on data warehouses. Hence, extensions or modifications are required to achieve
good performance.

Relational DBMSs were first architected in the 1970s and 1980s to address
the needs of business data processing applications, since that was the dominant requirement at
the time. In the early 1990s the market for data warehouses and “business analytics” appeared,
and has grown dramatically since that time. By the 1990s, on-line transaction processing (OLTP)
had replaced batch business data processing as the dominant paradigm for database usage.
Moreover, most OLTP systems had banks of computer operators submitting transactions, either
from phone conversations with the end customer or by performing data entry from paper.
Automated teller machines had become widespread, allowing customers to do certain
interactions directly without operator intervention. Response time for such transactions was
crucial to productivity. Such response time requirements have only become more urgent and
varied today as the web is fast replacing operators with self-service by the end customer. About
the same time, enterprises in the retail space had the idea to capture all historical sales
transactions, and to store them typically for one or two years. Such historical sales data can be
used by buyers to figure out “what’s hot and what’s not.” Such information can be leveraged to
affect purchasing patterns. Similarly, such data can be used to decide what items to put on
promotion, which ones to discount, and which ones to send back to the manufacturer. The
common wisdom of the time was that a historical data warehouse in the retail space paid for
itself through better stock management and shelf and store layout within a matter of months. It was clear
at the time that a data warehouse should be deployed on separate hardware from an OLTP
system. Using that methodology, the lengthy (and often unpredictable) business intelligence
queries would not spoil OLTP response time. Also, the nature of the data is very different:
warehouses deal with history; OLTP deals with “now.”

Conclusion
The task of writing and maintaining a high-performance, fully functional relational DBMS from
scratch is an enormous investment in time and energy. Many of the lessons of relational DBMSs,
however, translate over to new domains. Web services, network-attached storage, text and e-mail
repositories, notification services, and network monitors can all benefit from DBMS research and
experience. Data-intensive services are at the core of computing today, and knowledge of
database system design is a skill that is broadly applicable, both inside and outside the halls of
the main database shops. These new directions raise a number of research problems in database
management as well, and point the way to new interactions between the database community and
other areas of computing.

References
[1] J. Gray and A. Reuter, Transaction Processing: Concepts and Techniques. Morgan
Kaufmann, 1993.
[2] H. I. Hsiao and D. J. DeWitt, “Chained declustering: A new availability strategy for
multiprocessor database machines,” in Proceedings of Sixth International Conference on Data
Engineering (ICDE), pp. 456–465, Los Angeles, CA, November 1990.
[3] W. Bridge, A. Joshi, M. Keihl, T. Lahiri, J. Loaiza, and N. MacNaughton, “The Oracle
universal server buffer,” in Proceedings of 23rd International Conference on Very Large Data
Bases (VLDB), pp. 590–594, Athens, Greece, August 1997.
[4] P. G. Selinger, M. Astrahan, D. Chamberlin, R. Lorie, and T. Price, “Access path selection in
a relational database management system,” in Proceedings of ACM-SIGMOD International
Conference on Management of Data, pp. 22–34, Boston, June 1979.
[5] M. Stonebraker, “Retrospection on a database system,” ACM Transactions on Database
Systems (TODS), vol. 5, pp. 225–240, 1980.
[6] G. Graefe, “Query evaluation techniques for large databases,” Computing Surveys, vol. 25,
pp. 73–170, 1993.
[7] C. Graham, “Market share: Relational database management systems by operating system,
worldwide, 2005,” Gartner Report No: G00141017, May 2006.
[8] OLAP Market Report. Online manuscript. http://www.olapreport.com/market.htm.
[9] M. Stonebraker, “Inclusion of new types in relational data base systems,” ICDE, pp. 262–
269, 1986.
