


Knowledge Discovery in Textual Databases (KDT)

Ronen Feldman and Ido Dagan

Math and Computer Science Dept.


Bar-Ilan University
Ramat-Gan, ISRAEL 52900
{feldman,dagan}@bimacs.cs.biu.ac.il

Abstract

The information age is characterized by a rapid growth in the amount of information available in electronic media. Traditional data handling methods are not adequate to cope with this information flood. Knowledge Discovery in Databases (KDD) is a new paradigm that focuses on computerized exploration of large amounts of data and on the discovery of relevant and interesting patterns within them. While most work on KDD is concerned with structured databases, it is clear that this paradigm is also required for handling the huge amount of information that is available only in unstructured textual form. To apply traditional KDD to texts it is necessary to impose on the data some structure that is rich enough to allow for interesting KDD operations. On the other hand, we have to consider the severe limitations of current text processing technology and define rather simple structures that can be extracted from texts fairly automatically and at a reasonable cost. We propose using a text categorization paradigm to annotate text articles with meaningful concepts that are organized in a hierarchical structure. We suggest that this relatively simple annotation is rich enough to provide the basis for a KDD framework, enabling data summarization, exploration of interesting patterns, and trend analysis. This research combines the KDD and text categorization paradigms and suggests advances to the state of the art in both areas.

Introduction

Knowledge discovery is defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from given data [Piatetsky-Shapiro and Frawley 1991]. Algorithms for knowledge discovery ought to be efficient and to discover only interesting knowledge. To be regarded as efficient, the complexity of an algorithm must be polynomial (of low degree) in both space and time; algorithms that cannot meet this criterion will not be able to cope with very large databases. Knowledge is regarded as interesting if it provides some nontrivial and useful insight about objects in the database. There are two main bodies of work in knowledge discovery. The first concentrates on applying machine learning and statistical analysis techniques towards automatic discovery of patterns in knowledge bases, while the other concentrates on providing a user-guided environment for exploration of data. Among the systems that belong to the first group we can mention EXPLORA (Klosgen, 1992), KDW (Piatetsky-Shapiro and Matheus, 1992), and Spotlight (Anand and Kahn, 1991). Among the systems that belong to the second group we can mention IMACS (Brachman et al., 1992) and the Nielsen Opportunity Explorer (Anand and Kahn, 1993).

Most previous work in knowledge discovery was concerned with structured databases. In reality, a large portion of the available information does not appear in structured databases but rather in collections of text articles drawn from various sources. However, before we can perform any kind of knowledge discovery in texts we must extract some structured information from them. Here we show how the Knowledge Discovery in Texts (KDT) system uses the simplest form of information extraction, namely the categorization of the topics of a text by meaningful concepts. While more complex types of information have been extracted from texts, most notably in the work presented at the series of Message Understanding Conferences (MUC), text categorization methods were shown to be simple, robust and easy to reproduce. Text categorization can therefore be considered an acceptable prerequisite for initial KDT efforts, which can later be followed by the incorporation of more complex data types.

Data Structure: the Concept Hierarchy

In order to perform KDD tasks it is traditionally required that the data be structured in some way. Furthermore, this structure should reflect the way in which the user conceptualizes the domain that is described by the data.

Most work on KDD is concerned with structured databases, and simply utilizes the given database structure for the KDD purposes. In the case of unstructured texts, we have to decide which structure to impose on the data. In doing so, we have to consider the following tradeoff very carefully. Given the severe limitations of current technology in robust processing of text, we need to define rather simple structures that can be extracted from texts fairly automatically and at a reasonable cost. On the other hand, the structure should be rich enough to allow for interesting KDD operations.
In this paper we propose a rather simple data structure which is relatively easy to extract from texts. As described below, this data structure enables interesting KDD operations. Our main goal is to study text collections by viewing and analyzing various concept distributions. Using concept distributions enables us to identify distributions that deviate strongly from the average distribution (of some class of objects) or that are highly skewed (when a uniform distribution is expected). After identifying the limits of this data structure it will be possible to extract further types of data from the text, enhance the KDD algorithms to exploit the new types of data, and examine their overall contribution to the KDD goals.

The Concept Hierarchy

The concept hierarchy is the central data structure in our architecture. The concept hierarchy is a directed acyclic graph (DAG) of concepts where each concept is identified by a unique name. An arc from concept A to B denotes that A is a more general concept than B (e.g., communication → wireless communication → cellular phone, company → IBM, activity → product announcement). A portion of the "technology" subtree in the concept hierarchy is shown in Figure 1 (the edges point downward).

The hierarchy contains only concepts that are of interest to the user. Its structure defines the generalizations and partitioning that the user wants to make when summarizing and analyzing the data. For example, the arc wireless communication → cellular phone denotes that at a certain level of generalization, the user wants to aggregate the data about cellular phones with the data about all other daughters of the concept "wireless communication". Also, when analyzing the distribution of data within the concept "wireless communication", one of the categories by which the data will be partitioned is "cellular phones". Currently, the concept hierarchy is constructed manually by the user. As future research, we plan to investigate the use of document clustering and term clustering methods (Cutting et al., 1993; Pereira et al., 1993) to support the user in constructing a concept hierarchy that is suitable for texts of a given domain.

[Figure 1 - Concept Hierarchy for technological concepts: Technology → Hardware → {Storage Devices, Computers}; Computers → {Super Computers, Main Frames, Workstations, Desktop Computers, Laptops/Notebooks}]

Tagging the text with concepts

Each article is tagged by a set of concepts that correspond to its content (e.g. {IBM, product announcement, Power PC}, {Motorola, patent, cellular phone}). Tagging an article with a concept implicitly entails its tagging with all the ancestors of the concept in the hierarchy. It is therefore desirable that an article be tagged with the lowest concepts possible. In the current version of the system these concept sets provide the only information extracted from an article, each set denoting the joint occurrence of its members in the article.

For the KDD purposes, it does not matter which method is used for tagging. As explained earlier, it is quite realistic to assume automatic tagging by some text categorization method. On the other hand, tagging may be semi-automatic or manual, as is common for many text collections for which keywords or category labels are assigned by hand (as in Reuters, ClariNet and Individual).
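To make the tagging semantics concrete, the following minimal sketch shows a hierarchy fragment and the ancestor-closure rule described above. It is an illustration only (the paper's prototype, described later, was written in LPA Prolog); the Python code, the hierarchy fragment and all function names are our own assumptions.

    # A minimal sketch of the concept hierarchy and implicit ancestor tagging.
    # The hierarchy fragment and helper names are illustrative assumptions,
    # not the paper's actual implementation.

    # parent -> daughters; roughly the DAG of Figure 1, plus a "company" branch.
    HIERARCHY = {
        "technology": ["hardware"],
        "hardware": ["storage devices", "computers"],
        "computers": ["super computers", "main frames", "workstations",
                      "desktop computers", "laptops/notebooks"],
        "company": ["IBM", "Digital"],
    }

    def parents(concept):
        """All direct parents of a concept (a node may have several in a DAG)."""
        return [p for p, daughters in HIERARCHY.items() if concept in daughters]

    def expand_with_ancestors(tags):
        """Tagging an article with a concept implicitly tags it with all ancestors."""
        closed = set(tags)
        frontier = list(tags)
        while frontier:
            for p in parents(frontier.pop()):
                if p not in closed:
                    closed.add(p)
                    frontier.append(p)
        return closed

    # An article tagged with the lowest possible concepts...
    article_tags = {"IBM", "workstations"}
    # ...is implicitly tagged with "computers", "hardware", "technology", "company".
    print(expand_with_ancestors(article_tags))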
KDD over concept distributions

Concept Distributions

The KDD mechanism summarizes and analyzes the content of the concept sets that annotate the articles of the database. The basic notion for describing this content is the distribution of daughter concepts relative to their siblings (or, more generally, the distribution of descendants of a node relative to other descendants of that node). Formally, we let a concept node C in the hierarchy specify a discrete random variable whose possible values are denoted by its daughters (from now on we refer to daughters for simplicity, but the definitions can be applied to any combination of levels of descendants). We denote the distribution of the random variable by P(C=c), where c ranges over the daughters of C. The event C=c corresponds to the annotation of a document with the concept c. P(C=c_i) is the proportion of documents annotated with c_i among all documents annotated with any daughter of C.
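A minimal sketch of this definition over toy tag sets (illustrative data and names, not the paper's corpus or code); here a document tagged with several daughters of C contributes one count per daughter, so the result is normalized to a proper distribution — an assumption on our part:

    from collections import Counter

    # Compute the daughter distribution P(C=c) of a concept C over a toy
    # collection of tag sets, following the definition above.
    DAUGHTERS = {"computers": ["main frames", "workstations", "PCs"]}

    def daughter_distribution(docs, concept):
        """P(C=c): proportion of annotations with daughter c among all
        annotations with any daughter of C."""
        daughters = set(DAUGHTERS[concept])
        counts = Counter(c for tags in docs for c in tags & daughters)
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()}

    docs = [{"IBM", "PCs"}, {"Digital", "PCs"}, {"IBM", "workstations"},
            {"IBM", "main frames"}, {"Digital", "workstations"}]
    print(daughter_distribution(docs, "computers"))
    # -> {'PCs': 0.4, 'workstations': 0.4, 'main frames': 0.2}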
For example, the occurrences of the daughters of the concept C="computers" in the text corpus may be distributed as follows: P(C="mainframes")=0.1; P(C="workstations")=0.4; P(C="PCs")=0.5.

We may also be interested in the joint distribution of several concept nodes. For example, the joint distribution of C1=company and C2="computers" may be as follows (the figures are consistent with those of the previous example): P(C1=IBM, C2=mainframes)=0.07; P(C1=Digital, C2=mainframes)=0.03; P(C1=IBM, C2=workstations)=0.2; P(C1=Digital, C2=workstations)=0.2; P(C1=IBM, C2=PCs)=0.4; P(C1=Digital, C2=PCs)=0.1. A data point of this distribution is a joint occurrence of daughters of the two concepts company and "computers".

The daughter distribution of a concept may be conditioned on some other concept(s), which is regarded as a conditioning event. For example, we may be interested in the daughter distribution of C="computers" in articles which discuss announcements of new products. This distribution is denoted as P(C=c | announcement), where announcement is the conditioning concept. P(C=mainframes | announcement), for example, denotes the proportion of documents annotated with both mainframes and announcement among all documents annotated with both announcement and any daughter of "computers". (A similar use of conditional distributions appears in the EXPLORA system (Klosgen 1993); our conditioned variables and conditioning events are analogous to Klosgen's dependent and independent variables.)
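Continuing the sketch above, a conditional daughter distribution P(C=c | e) can be read as the same computation restricted to documents tagged with the conditioning concept e. Again, the data and helper names are illustrative assumptions rather than the paper's implementation:

    from collections import Counter

    def conditional_distribution(docs, daughters, conditioning):
        """P(C=c | e): daughter distribution over only those documents
        tagged with the conditioning concept e."""
        relevant = [tags for tags in docs if conditioning in tags]
        counts = Counter(c for tags in relevant for c in tags & set(daughters))
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()} if total else {}

    docs = [{"announcement", "PCs", "IBM"},
            {"announcement", "workstations", "Digital"},
            {"earnings", "main frames", "IBM"}]
    daughters = ["main frames", "workstations", "PCs"]
    # P(C=c | announcement): only the first two documents are counted.
    print(conditional_distribution(docs, daughters, "announcement"))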
Concept distributions provide the user with a powerful way of browsing and summarizing the data. One form of query in the system simply presents distributions and data points in the hierarchy. As is common in data analysis and summarization, a distribution can be presented either as a table or as a graphical chart (bar, pie or radar). In addition, the concept distributions serve to identify interesting patterns in the data. Browsing and identification of interesting patterns would typically be combined in the same session, as the user specifies which portions of the concept hierarchy she wishes to explore.

Comparing Distributions

The purpose of KDD is to present "interesting" information to the user. We suggest quantifying the degree of "interest" of some data by comparing it to a given, or "expected", model. Usually, interesting data is data that deviates significantly from the expected model; in some cases, the user may instead be interested in data that agrees closely with the model. In our case, we use concept distributions to describe the data. We therefore need a measure for comparing the distribution defined by the data to a model distribution. We chose to use the relative entropy measure (or Kullback-Leibler (KL) distance) defined in information theory, though we plan to investigate other measures as well. The KL-distance seems to be an appropriate measure for our purpose since it measures the amount of information we lose if we model a given distribution p by another distribution q. Denoting the distribution of the data by p and the model distribution by q, the distance from p(x) to q(x) measures the amount of "surprise" in seeing p while expecting q. Formally, the relative entropy between two probability distributions p(x) and q(x) is defined as:

    D(p||q) = Σ_x p(x) log( p(x) / q(x) ) = E_p[ log( p(x) / q(x) ) ]

The relative entropy is always non-negative, and is 0 if and only if p=q.

According to this view, interesting distributions are those with a large distance to the model distribution, and interesting data points are those that make a big contribution to this distance, in one or several distributions. Below we identify three types of model distributions with which it is interesting to compare a given distribution of the data.

Model Distributions

The Uniform Distribution

Comparing with the uniform distribution tells us how "sharp" a given distribution is, i.e., how heavily it is concentrated on only a few of the values it can take. For example, consider a distribution of the form P(C=c | x_i), where C=company and x_i is a specific product (a daughter of the concept product). Distributions of this form will have a large distance from the uniform distribution for products x_i that are mentioned in the texts only in connection with very few companies (e.g., products that are manufactured by only a few companies). Using the uniform distribution as a model means that we base our expectation only on the structure of the concept hierarchy, without relying on any findings in the data. In this case, there is no reason to expect different probabilities for different siblings (an uninformative prior). Notice that measuring the KL-distance to the uniform distribution is equivalent to measuring the entropy of the given distribution, since D(p||u) = log(N) - H(p), where u is the uniform distribution, N is the number of possible values of the (discrete) distribution, and H is the entropy function. Looking at D(p||u) makes it clear why using entropy to measure the "interestingness", or the "informativeness", of the given distribution is a special case of the general framework, with the uniform distribution as the expected model.
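A small sketch of this machinery, including a check of the identity D(p||u) = log(N) - H(p) noted above; the numbers reuse the earlier "computers" example, and the code is our illustration rather than the paper's implementation:

    import math

    def kl(p, q):
        """D(p||q) = sum_x p(x) log(p(x)/q(x)); by convention 0*log(0/q)=0."""
        return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

    def entropy(p):
        return -sum(px * math.log(px) for px in p.values() if px > 0)

    p = {"mainframes": 0.1, "workstations": 0.4, "PCs": 0.5}
    u = {c: 1 / len(p) for c in p}   # uniform model over the daughters

    # D(p||u) equals log(N) - H(p), so the KL-distance to the uniform
    # model is just a shifted, negated entropy.
    assert abs(kl(p, u) - (math.log(len(p)) - entropy(p))) < 1e-12
    print(kl(p, u))   # "sharpness" of p relative to a uniform expectation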
Sibling Distribution

Consider a conditional distribution of the form P(C=c | x_i), where x_i is a conditioning concept. In many cases, it is natural to expect that this distribution would be similar to other distributions of this form in which the conditioning event is a sibling of x_i. For example, for C=activity and x_i=Ford, we could expect a distribution that is quite similar to the distributions where the conditioning concept is another car manufacturer.

To capture this reasoning, we use Avg P(C=c | x), the average sibling distribution, as a model for P(C=c | x_i), where x ranges over all siblings of x_i (including x_i itself). In the above example, we would measure the distance from the distribution P(C=activity | Ford) to the average distribution Avg P(C=activity | x), where x ranges over all car manufacturers. The distance between these two distributions would be large if the activity profile of Ford differs greatly from the average profile of the other car manufacturers.

In some cases, the user may be interested in comparing two distributions which are conditioned on two specific siblings (e.g. Ford and General Motors). In this case, the distance between the distributions indicates how similar the profiles of these two siblings are with regard to the conditioned class C (e.g. companies that are similar in their activity profile). Such distances can also be used to cluster siblings, forming subsets of siblings that are similar to each other. (Notice that the KL-distance is an asymmetric measure. If desired, a symmetric measure can be obtained by summing the two distances in both directions, that is, D(p||q)+D(q||p).)

Past Distributions (trend analysis)

One of the most important tools for an analyst is the ability to follow trends in the activities of companies in the various domains. For example, such a trend analysis tool should be able to compare the activities of a company in a certain domain in the past with its activities in that domain currently. An example conclusion from such an analysis can be that a company is shifting its interests, and rather than concentrating on one domain it is moving to another.

Finding trends is achieved by using a distribution constructed from old data as the expected model for the same distribution constructed from new data. Trends can then be discovered by searching for significant deviations from the expected model.

Finding Interesting Patterns

Interesting patterns can be identified at two levels. First, we can identify interesting patterns by finding distributions that have a high KL-distance to the expected model, as defined by one of the three methods above. Second, when focusing on a specific distribution, we can identify interesting patterns by focusing on those components that most affect the KL-distance to the expected model. For example, when focusing on the distribution P(C=activity | Ford), we can discover which activities are mentioned most frequently with Ford (deviation from the uniform distribution), in which activities Ford is most different from an "average" car manufacturer (deviation from the average sibling distribution), and which activities have most changed their proportion over time within the overall activity profile of Ford (deviation from the past distribution).

A major issue for future research is to develop efficient algorithms that would search the concept hierarchy for interesting patterns of the two types above. In our current implementation we use exhaustive search, which is made feasible by letting the user specify each time which nodes in the hierarchy are of interest (see examples below). It is our impression that this mode of operation is useful and feasible, since in many cases the user can, and would actually like to, provide guidance on areas of current interest. Naturally, better search capabilities would further improve the system.
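The following sketch illustrates the second level of pattern finding against the average sibling model: it computes Avg P(C=c | x) over a set of siblings and ranks the components of one sibling's distribution by their contribution to the KL-distance. The toy activity profiles are our own assumptions; a past distribution could be plugged in as the model q in exactly the same way for trend analysis.

    import math

    def average_distribution(dists):
        """Avg P(C=c | x): pointwise average of the sibling distributions."""
        keys = {k for d in dists for k in d}
        n = len(dists)
        return {k: sum(d.get(k, 0.0) for d in dists) / n for k in keys}

    def kl_contributions(p, q):
        """Per-component terms p(x) log(p(x)/q(x)); they sum to D(p||q)."""
        return {x: px * math.log(px / q[x]) for x, px in p.items() if px > 0}

    # Toy activity profiles P(activity | company) for three "sibling" companies.
    ford = {"recall": 0.5, "merger": 0.2, "product": 0.3}
    gm   = {"recall": 0.1, "merger": 0.4, "product": 0.5}
    vw   = {"recall": 0.1, "merger": 0.3, "product": 0.6}

    avg = average_distribution([ford, gm, vw])
    contrib = kl_contributions(ford, avg)
    # Activities on which Ford deviates most from the "average" manufacturer:
    for activity, term in sorted(contrib.items(), key=lambda kv: -kv[1]):
        print(activity, round(term, 3))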
Implementation and Results

In order to test our framework, we have implemented a prototype of KDT in LPA Prolog for Windows. The prototype provides the user with a convenient way of finding interesting patterns in text corpora. The corpus we used for this paper is the Reuters-22173 text categorization test collection. The documents in the Reuters-22173 collection appeared on the Reuters newswire in 1987. The 22173 documents were assembled and indexed with categories by personnel from Reuters Ltd. and Carnegie Group, Inc. in 1987. Further formatting and data file production was done in 1991 and 1992 by David D. Lewis and Peter Shoemaker.

The documents were tagged by the Reuters personnel with 135 categories from the Economics domain. Our prototype system converted the document tag files into a set of Prolog facts. Each document is represented as a Prolog fact which includes all the tags related to the document. There are 5 types of tags: countries, topics, people, organizations and stock exchanges. The user can investigate the Prolog database using this framework. The examples in this paper relate to the country and topic tags of the articles (which are the largest tag groups), although we have found interesting patterns in the other tag groups as well.

Typically the user would start a session with the prototype by either loading a class hierarchy from a file or by building a new hierarchy based on the collection of tags of all articles. The following classes are a sample of the classes that were built out of the collection of countries mentioned in the articles: South America, Western Europe and Eastern Europe. In the next phase we compared the average topic distribution of countries in South America to the average topic distribution of countries in Western Europe. In the terms of the previous section, we compared, for all topics t, the expression Avg P(Topic = t | c) where c ranges over all countries in South America to the same expression where c ranges over all countries in Western Europe. In the next tables we see the topics for which we got the largest KL-distance between the corresponding averages over the two classes. In Table 1 we see topics which have a much larger share in South America than in Western Europe. In Table 2 we see the topics which have a much larger share in Western Europe than in South America.

Table 1 - Comparing South America to Western Europe

    Topic      Rel Entropy   % / # in S.A.   % / # in W.E.
    coffee     0.414         21.6 / 201      0.3 / 37
    loan       0.160         18.2 / 169      2.4 / 102
    crude      0.055         13.3 / 124      5.1 / 86
    copper     0.023         2.8 / 26        0.4 / 12
    silver     0.017         1.4 / 12        0.1 / 7

Table 2 - Comparing Western Europe to South America

    Topic      Rel Entropy   % / # in W.E.   % / # in S.A.
    acq        0.119         9.5 / 373       0.5 / 9
    cbond      0.067         5.8 / 230       0.4 / 6
    earn       0.052         5.2 / 204       0.5 / 22
    corp_news  0.035         1.8 / 71        0.05 / 1
    money_fx   0.031         4.9 / 191       1.1 / 13
    interest   0.029         2.6 / 101       0.2 / 4

We can see that (according to this text collection) South American countries have a much larger portion of agriculture and rare metals topics, while Western European countries have a much larger portion of financial topics.
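The computation behind Tables 1 and 2 can be sketched as follows (toy counts, not the Reuters data; the smoothing constant is our own guard against zero probabilities and is not mentioned in the paper):

    import math

    def normalize(counts):
        total = sum(counts.values())
        return {t: n / total for t, n in counts.items()}

    def class_average(country_topic_counts, countries):
        """Avg P(Topic = t | c) for c ranging over a class of countries."""
        dists = [normalize(country_topic_counts[c]) for c in countries]
        topics = {t for d in dists for t in d}
        return {t: sum(d.get(t, 0.0) for d in dists) / len(dists) for t in topics}

    counts = {
        "brazil":   {"coffee": 47, "loan": 108, "trade": 20},
        "colombia": {"coffee": 29, "loan": 3,   "trade": 1},
        "uk":       {"coffee": 2,  "loan": 30,  "acq": 90, "trade": 10},
        "france":   {"coffee": 1,  "loan": 20,  "acq": 60, "trade": 8},
    }
    p = class_average(counts, ["brazil", "colombia"])   # "South America"
    q = class_average(counts, ["uk", "france"])         # "Western Europe"

    # Per-topic KL terms p(t) log(p(t)/q(t)); the large positive terms play
    # the role of the high "Rel Entropy" rows in Table 1.
    smooth = 1e-6
    terms = {t: pt * math.log(pt / q.get(t, smooth))
             for t, pt in p.items() if pt > 0}
    print(sorted(terms.items(), key=lambda kv: -kv[1]))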
In the next phase, we went into a deeper analysis, comparing the individual topic distribution of each country in South America to the average topic distribution of all countries in South America. In Table 3 we see the topics in which the country topic distribution deviated considerably from the average distribution (i.e., the topics that most affected the KL-distance to the average distribution). From the table we can infer the following information:

• Colombia puts a much larger emphasis on coffee than any other country in South America. (It is interesting to note that Brazil, which has 47 articles about coffee, more than any other country, is below the class average for coffee.)

• Both Brazil and Mexico (not shown) have a large proportion of articles that talk about loans.

Table 3 - Comparing Topic Distributions of Brazil and Colombia to Avg P(Topic = t | South America)

    Topic    Relative Entropy   % (#) in Brazil   Avg. % (#) in S.A.
    ship     0.065              7.4 (27)          1.0 (32)
    loan     0.063              29.6 (108)        18.2 (223)
    earn     0.057              5.5 (20)          0.5 (22)
    coffee   -0.029             12.9 (47)         21.6 (91)
    orange   0.025              2.2 (8)           0.2 (8)

    Topic    Relative Entropy   % (#) in Colombia   Avg. % (#) in S.A.
    coffee   0.259              59.2 (29)           21.6 (91)
    loan     -0.029             6.1 (3)             18.2 (223)
    crude    0.014              16.3 (7)            13.3 (66)
    cpi      0.013              4.1 (1)             2.0 (14)
    trade    0.006              4.1 (1)             2.9 (26)

In Table 4 we see the results of a similar analysis done from the opposite point of view. In this case we built a class of all agriculture-related topics, computed the country distribution of each individual topic, and compared it to the average country distribution over the class. We picked two of the topics that got the highest relative entropy and listed the countries that most affected the KL-distance to the average country distribution.

Table 4 - Comparing Country Distributions of cocoa and coffee to Avg P(Country = c | Agriculture)

    Country       Relative Entropy   % (#) in cocoa   Avg. % (#) in Agr.
    uk            0.207              32.1 (34)        7.2 (252)
    ghana         0.114              9.4 (10)         0.6 (16)
    ivory coast   0.098              8.5 (9)          0.6 (16)
    usa           -0.049             8.5 (9)          32.5 (1301)

    Country    Relative Entropy   % (#) in coffee   Avg. % (#) in Agr.
    brazil     0.178              23.0 (47)         3.9 (132)
    colombia   0.171              14.2 (29)         0.9 (42)
    usa        -0.051             10.3 (21)         32.5 (1301)
    uganda     0.038              2.9 (6)           0.2 (6)

Finding Elements with Small Entropy

Another KDD tool is aimed at finding elements in the database that have relatively low entropy, i.e., elements that have "sharp" distributions (a "sharp" distribution is one that is heavily concentrated on a small fraction of the values it can take).

When the system computed the entropy of the topic distribution of all countries in the database, we found that Iran (according to the text collection used), which appears in 141 articles, has an entropy of 0.508: 69 of the articles are about crude, 59 are about ship, and the other 13 articles in which Iran appears belong to 13 different topics. Another country with a relatively low topic entropy is Colombia; in this case 75.5% of the topics in which Colombia is mentioned are coffee (59.2%) and crude (16.3%).

When the system computed the entropy of the country distribution of all topics, we noticed that the topic "earn" is highly concentrated in 6 countries: more than 95% of the articles that talk about earnings involve the countries USA, Canada, UK, West Germany, Japan and Australia. The other 5% are distributed among another 31 countries.
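A sketch of this low-entropy finder, with toy counts loosely echoing the Iran example (the 13 residual topics are lumped into one "other" bucket, and the entropy base and data are our choices, not the paper's):

    import math
    from collections import Counter

    def entropy(counts):
        """Entropy (in bits) of the distribution induced by raw counts."""
        total = sum(counts.values())
        return -sum(n / total * math.log(n / total, 2) for n in counts.values())

    country_topics = {
        "iran":  Counter({"crude": 69, "ship": 59, "other": 13}),
        "uk":    Counter({"acq": 40, "money_fx": 35, "trade": 30, "earn": 25}),
        "ghana": Counter({"cocoa": 10, "coffee": 1}),
    }
    # Rank countries by topic entropy; "sharp" (low-entropy) elements first.
    for country, topics in sorted(country_topics.items(),
                                  key=lambda kv: entropy(kv[1])):
        print(country, round(entropy(topics), 3))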
Summary

We have presented a new framework for knowledge discovery in texts. This framework is based on three components: the definition of a concept hierarchy, the categorization of texts by concepts from the hierarchy, and the comparison of concept distributions to find "unexpected" patterns. We conjecture that our uniform and compact model can become useful for KDD in structured databases as well. Currently, we are performing research in text categorization, with some similarity to that of Hebrail and Marsais (1992), which is geared towards making the KDT system more feasible and accurate. In addition, we are building another layer of the system that will provide the user with textual conclusions based on the distribution analysis it performs. We plan to use the KDT system for filtering and summarizing news articles. We conjecture that the concept distributions of articles marked as interesting by the user can be used to update the user's personal news profile and to suggest subscriptions to news groups with similar characteristics.

Acknowledgments

The authors would like to thank Haym Hirsh and the anonymous reviewers for helpful comments. Ronen Feldman is supported by an Eshkol Fellowship.

References

Anand T. and Kahn G., 1993. Opportunity Explorer: Navigating Large Databases Using Knowledge Discovery Templates. In Proceedings of the 1993 Workshop on Knowledge Discovery in Databases.

Apte C., Damerau F. and Weiss S., 1994. Towards Language Independent Automated Learning of Text Categorization Models. In Proceedings of the ACM-SIGIR Conference on Information Retrieval.

Brachman R., Selfridge P., Terveen L., Altman B., Borgida A., Halper F., Kirk T., Lazar A., McGuinness D., and Resnick L., 1993. Integrated Support for Data Archaeology. International Journal of Intelligent and Cooperative Information Systems.

Cutting C., Karger D. and Pedersen J., 1993. Constant Interaction-Time Scatter/Gather Browsing of Very Large Document Collections. In Proceedings of the ACM-SIGIR Conference on Information Retrieval.

Feldman R., 1994. Knowledge Discovery in Textual Databases. Technical Report, Bar-Ilan University, Ramat-Gan, Israel.

Frawley W.J., Piatetsky-Shapiro G., and Matheus C.J., 1991. Knowledge Discovery in Databases: An Overview. In Knowledge Discovery in Databases, eds. G. Piatetsky-Shapiro and W. Frawley, 1-27. Cambridge, MA: MIT Press.

Hebrail G. and Marsais J., 1992. Experiments of Textual Data Analysis at Electricite de France. In Proceedings of IFCS-92 of the International Federation of Classification Societies.

Jacobs P., 1992. Joining Statistics with NLP for Text Categorization. In Proceedings of the 3rd Conference on Applied Natural Language Processing.

Klosgen W., 1992. Problems for Knowledge Discovery in Databases and Their Treatment in the Statistics Interpreter EXPLORA. International Journal for Intelligent Systems 7(7), 649-673.

Lewis D., 1992. An Evaluation of Phrasal and Clustered Representations on a Text Categorization Problem. In Proceedings of the ACM-SIGIR Conference on Information Retrieval.

Lewis D. and Gale W., 1994. Training Text Classifiers by Uncertainty Sampling. In Proceedings of the ACM-SIGIR Conference on Information Retrieval.

Mertzbacher M. and Chu W., 1993. Pattern-Based Clustering for Database Attribute Values. In Proceedings of the 1993 Workshop on Knowledge Discovery in Databases.

