English For Computer Science
A: If you are an engineer, you have to write reports that other people can
understand.
There is something that I have noticed about students studying for Computer
Science tests, especially at the high school level – it seems that many study
to pass the test, but not to understand the concepts involved. It's a
ridiculous scene – watching students pace down the hallway before an exam,
reciting bubble sort (and, if they make it to university, linked list patterns) as
poetry. And while I would argue that computer programming has certain
poetic features, such static regurgitation of a memorized fact serves little
practical purpose.
It seems that technical content takes precedence over creativity and logic –
the pillars of computer science. When students concentrate on memorizing
oddities of specific syntax, the larger concept behind the coded expression
is lost. The situation arises from students' incomplete understanding of
complex, strict languages and the frustration of getting things to work "just
right", and it is reinforced by cookie-cutter technical assignments and tests.
Some simply give up, memorize the pattern they are told will be on the test,
and… pass. Decent grades paint a very misleading picture of the classroom's
achievements. University assignments that ask students to write a "program"
by filling out function stubs provided by TAs carry as much utility as the
spelling tests we used to have in elementary school. It's time for computer
science to grow up.
discuss
I think that the main reason people are able to perform in the freer
environment of English assignments is not a fundamental difference in the
subject matter. I think it is instead caused by the student's exposure to the
subject matter up to that point.
By the time some people get to their first CompSci class, they might never
have written code before. Compare this to English class, where you only get
to write essays after you have been exposed to the material for years. It
takes roughly ten years (or so – I can't be bothered actually figuring it out)
of speaking English before you even get to the point of being able to begin
writing complicated, well-reasoned essays.
Computer Science might seem like it is in its early childhood, but that's
because by the time you've finished college you've learned the alphabet and
you can use three-syllable words without your brain melting (to juxtapose
the two fields). You are by no means William Shakespeare yet.
When we think of the impact that computers and the internet have had on
the use of the English language, it often brings to mind the nightmares borne
of the scummy posts between bulletin-boarders in the 1980s, when a 2400
baud modem was considered hotrod, and hacking could be done with a
homebuilt box of components, a couple of screwdrivers, and a payphone. It
brings to mind the controversy around the effect spellcheckers have had on
the thought processes that allow us to actually understand how to spell
words. It makes us question how much these modern conveniences of
thought and communication have contributed to the apparent degradation of
grammar and spelling in type – a medium once considered the arena of
professionals well versed in the intricate workings of the English language,
but which has since become far less exclusive.
But in those pioneering days of the silicon microchip, the internet was a
more tangible thing, a copper spiderweb of telephone lines that directly
connected people to one another, no matter the distance, instead of a
network depending upon the service providers of today to route information
through some globally accessible string of computer hardware shared by
everyone. There was a sort of odd, stifling privacy in
those days, but also a cave-like darkness that had yet to be explored, with
individuals shuffling slowly toward one another, their words little more than
luminescent green text appearing across shaky, grey-black screens, and the
name of the shadowy, almost legendary monster hiding in their midst on the
lips of everyone, cyber-savvy or not: "Hacker." He was the man who had all
the codes, all those secret ten-digit phone numbers that would give him
access to everything from the credit card accounts of the recently dead to the
power bills of friends and relatives and everything in between.
Those were the days that first left us questioning the effect that computers
(and later the internet) have had and will have on the common man's use and
understanding of the English language, and the question hasn't really gone
away. Consider a typical '90s sentence roaring through cyberspace with all
the blunt elegance of "Sup d00dz!1! 1337 lol! I juz haxx0rd teh p0wer c0!
lolz0rz!" or its more contemporary (and, grammatically speaking, far more
stable) equivalent: "Heyo lol i totaly just like downloded taht new harbl
movie lol." With language like that (and worse) showing up in print on
public domains, people start to wonder where society went wrong – is it
unreasonable to expect at least a fifth-grade understanding of the English
language from the hordes of people with access to a computer and the
internet, some of whom actually hold college degrees?
But perhaps things aren't all that bad. Let's take a look at the other side of
things – consider the fact that computers are now a recognized and accepted
part of education in modern-day, first-world countries. We tend to overlook
the fact that college requires the use of a computer for research and the
creation of essays and projects, or that some universities require students
to bring laptop computers to class, but nothing is taken so much for granted
as the fact that, being almost wholly text-based, computers (and the larger
body of the internet) absolutely require reading skills to operate and
navigate!
Beyond that, consider all that the denizens of the internet have done to
promote literacy – online writing workshops, guilds, and even reading-
assistance pages litter nearly every corner of the world wide web, and many
famous works of literature are freely available online if you know where to
look, from Mary Shelley's Frankenstein and Joseph Conrad's Heart of
Darkness to the works of William Gibson, Albert Einstein, and Niccolò
Machiavelli. Publishing companies, from the large, universally accepted,
and ultimately exclusive kind to the smaller, homebrewed variety that cling
to life and desperately seek out new writers to boost their notoriety, work
hard to saturate the web with the written word. Intelligent and intellectually
deep material shows up regularly in long diatribes and short, meandering
strings on hundreds of thousands of websites across the globe every day,
and e-zines and e-books are everywhere, ranging from the fine and
professional to the grammatically lacking and pointless sores of putrid and
festering quasi-English that infest the internet like a literary plague. Online
references, like the easily remembered Dictionary.com and its sister sites,
or encyclopedias like Wikipedia, make information about almost anything
easily accessible, and usually present it in terms that are easy to understand
without sacrificing much, if any, critical detail or feeling.
So where's the problem then? Well, perhaps it's in the way we're conditioned
to think. Perhaps one of the most frustrating things to the reading adult in
our modern society is the abundance of obvious flow and grammatical errors
in the things we read. We're so used to everything written professionally
being typed, and everything typed being professionally written, that our
minds have become accustomed to the notion, and errors throw the average
reader for a loop. Logically speaking, it makes sense, right? We pick up our
favorite newspapers, magazines, and novels and, if we're looking, we'll see
perhaps two or three errors in every hundred pages or so (that's roughly 1 in
17,000 words for market paperbacks). But if we leaf through the myriad
posts of an online blog or a discussion forum, the more reserved and
"grammar-conscious" of us find ourselves balking at the frequency of minor
errors that could be eliminated by a quick once-over glance at the material in
question before it's sent or posted.
Perhaps this viewpoint also stems from the fact that typed text typically
requires a machine (whether a typewriter or a computer) to be properly done,
because, in a way, machines are seen as superior to the fleshy elegance of
humanity. After all, we don't expect perfection of spelling or grammar in the
mad scribbles of a physician, the love-notes we find wedged in the strangest
(and yet always most familiar) places by our own special someone, or the
handwritten letters we get from high school students or children studying
abroad. Not convinced? Consider any science fiction film or television
show – the machine is always far more capable, more advanced, and less
likely to make mistakes than the human characters. As a society, we've
grown to accept this odd fact; we've almost been conditioned by various
forms of media to believe that machines have been programmed, not exactly
to be perfect, but to be somewhat above the human mind in their analytical
capacity. To err is human, as the saying goes, but we expect better from the
clearly legible etches left on the amorphous walls of cyberspace by the
millions upon millions of individuals that go tracking through its depths
every day. After all, how hard is it to use a spellchecker, right?
So while the tainted waters of the internet continue to pump the usual mix of
pristine, clear knowledge and raw sewage into our computers, the debate
rages on amidst it all. Will computers and the internet be grammar and
spelling's saving grace, or will they ultimately prove to be the shadowy
specters caught holding the axe when the written word falls before the
relentless drumbeat of human progress? It all comes down to how we use the
technology at our disposal, and the example we set for future generations.
With great power comes great responsibility; our choices today shape the
realities of tomorrow, and with a little work and perhaps a little luck, we'll
see the dawn of a new age of literacy and clarity of thought, and an
intellectual reawakening unlike anything the world has seen before.
wiki
In many languages, Greek and Latin roots constitute an important part of the
scientific vocabulary. This is especially true for the terms referring to fields
of science. For example, the equivalent words for mathematics, physics,
chemistry, geology, and genealogy are roughly the same in many languages.
As for computer science, numerous words in many languages are from
American English, and the vocabulary can evolve very quickly. An
exception to this trend is the word referring to computer science itself,
which in many European languages is roughly the same as the English
informatics: German: Informatik; French: informatique; Spanish, Italian,
and Portuguese: informática; Polish: informatyka.
internet
Table of Contents
Preface
The current situation
Why is it so?
Effects of the importance of the Internet and English
An official language for the Internet?
But can things change?
Is English a suitable universal language?
A constructed international language?
An alternative: machine translation
Final remarks
Preface
The impetus for writing this article was a discussion in the
newsgroup sci.lang. The original question was "whether or not English
should be made the universal language of the internet".
Why is it so?
Generally speaking, when a language has gained the position of a universal
language, that position tends to affirm and extend itself. Since
"everyone" knows and uses English, people are almost forced to learn
English and use it, and to learn it better.
Even if you expect the majority of your readers to understand your native
language, you may be tempted to use English when writing e.g. about
research work. Usually researchers all over the world know English and use
it a lot, and often the relevant terminology is more stable and well-known in
English than in your own language. Thus, to maximize the number of
interested people that can understand your text, you often select English
even if the great majority of your readers have the same native language as
you. Alternatively, you might write your texts both in your native language
and in English, but this doubles the work needed for writing your document
and possibly maintaining it. The maintenance problem is especially
important for documents on the World Wide Web - the information system
where one crucial feature is the ability to keep things really up to date.
Consequently, the use of English in essentially national contexts tends to
grow.
By the way, when people post articles to international groups in their own
languages, the reason is typically novice users' ignorance of basic facts about
the news system. People start posting articles before they have read what is
generally written to the group. One thing that causes this to happen relatively
often is that there is no easily accessible and usable list of groups together
with their content descriptions, and typically content descriptions do not
explicitly state what language(s) should be used in the group.
The universal language position, once gained, tends to be strong. But how is
such a position gained?
During the history of mankind, there have been several more or less
universal languages or lingua francas, such as Latin (and Greek) in the
Roman empire, mediaeval Latin in Western Europe, later French and
English. Universality is of course relative; it means universality in the
"known world" or "civilized world", or just in a large empire. No language
has been really universal (global), but the current position of English comes
closest. The position of a universal language has always been gained as a by-
product of some sort of imperialism: a nation has conquered a large area and
more or less assimilated it into its own culture, including language, thus
forming an empire. Usually the language of the conqueror has become the
language of the state and the upper class first, then possibly spread over the
society, sometimes almost wiping out the original languages of the
conquered areas. Sometimes - especially in the Middle Ages - the
imperialism has had a definite cultural and religious nature which may have
been more important than brute military and economic force.
In different countries and cultures, English has different positions. There are
countries where English is the native language of the majority, there are
countries where English is a widely known second language, and there are
countries where English has no special position. These differences add to the
above-mentioned polarization. Specifically, it is difficult for people in
former colonies of countries other than Great Britain (e.g. France, Spain,
the Netherlands) to adapt to the necessity of learning English. Locally, it
may be necessary to learn the language of the previous colonial power since
it is often an official language and the common language of educated people;
globally, English is necessary for living on the Internet. And the more
languages you have to learn well, the less time and energy you will have for
learning other things.
English can lose its position as a widely used (although not official)
universal language in two ways. Either a new empire emerges and its
language becomes universal, or a constructed language becomes very
popular. I believe most people regard both of these alternatives as extremely
improbable, if not impossible. Perhaps they are right, perhaps not.
I can see two possible empires emerging: the European Union and a yet-
nonexistent Japanese-Chinese empire.
The very idea is not inherently unrealistic, but it can only be realized if
strong economic and political interests are involved, such as the intended
creation of a European or Japanese-Chinese empire. The best that the
advocates of a constructed international language can hope for is that such
empires emerge and that the United States remains an important power, so
that the world will have a few strong empires which cannot beat each other
but must live in parallel and in cooperation. In such a situation, it might turn
out to be unrealistic not to agree on a common language which is not any
of the national languages.
During the last few decades, quite a lot of predictions and even promises
have been presented regarding machine translation, but useful software and
systems for it have not been available until recently. This has caused
disappointment and pessimism, to the extent that many people consider
machine translation definitely unrealistic.
Final remarks
Machine translation and constructed international languages are alternative
but not mutually exclusive solutions to the problem of communication
between people with different native languages. They can be combined in
several ways.
This paper will mainly discuss the effect of computer technology on the
design of artificial languages. There are, of course, many other aspects,
which will be discussed only briefly.
Definability
A language suitable for automatic processing should be defined formally as
far as possible. A formal description of the syntax would make it easier to
write software for both analyzing and generating the language. An official
dictionary should use a rigorously defined formalism to indicate properties
of words, such as word class and the transitiveness of verbs. Ideally, a
subset of the language itself should act as the metalanguage.
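To make this concrete, here is a minimal sketch in Python of what such a
machine-readable dictionary entry might look like; the field names, sample
words, and the choice of Python are my own illustrative assumptions, not
part of the original proposal.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Entry:
        word: str
        word_class: str             # e.g. "noun", "verb", "adverb"
        transitive: Optional[bool]  # only meaningful for verbs

    # A toy "official dictionary" with rigorously encoded word properties.
    DICTIONARY = {
        "house": Entry("house", "noun", None),
        "give":  Entry("give", "verb", True),
        "sleep": Entry("sleep", "verb", False),
    }

    # Software can query word properties instead of guessing from context.
    print(DICTIONARY["give"].transitive)   # True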
Some features of semantics could be defined formally. This applies in
particular to the meanings of derivative suffixes: they should be expressed
in a notation which specifies the meaning as an analytic expression.
Modularity
In compiler technology it is customary to make a clear distinction
between lexical, syntactic, and semantic analysis. The same approach could
be applied to the construction of an artificial language.
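As a rough illustration of that three-stage division, here is a toy pipeline
sketch in Python; the lexicon, the clause pattern, and the predicate notation
are invented for the example and are not taken from the text.

    # Toy three-stage pipeline: lexical -> syntactic -> semantic analysis.

    def lexical_analysis(text):
        """Split the text into (word, word-class) pairs using a toy lexicon."""
        lexicon = {"cat": "NOUN", "sees": "VERB", "bird": "NOUN"}
        return [(w, lexicon.get(w, "UNKNOWN")) for w in text.split()]

    def syntactic_analysis(tokens):
        """Accept only the toy clause pattern NOUN VERB NOUN."""
        classes = [c for _, c in tokens]
        if classes != ["NOUN", "VERB", "NOUN"]:
            raise SyntaxError(f"unexpected clause shape: {classes}")
        return {"subject": tokens[0][0],
                "verb": tokens[1][0],
                "object": tokens[2][0]}

    def semantic_analysis(clause):
        """Map the parsed clause onto a simple predicate representation."""
        return f"{clause['verb']}({clause['subject']}, {clause['object']})"

    print(semantic_analysis(syntactic_analysis(lexical_analysis("cat sees bird"))))
    # -> sees(cat, bird)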
Alphabet
Texts are produced, transmitted, and recorded using computers to a rapidly
increasing extent. The most common character set is still ASCII, which
contains the English letters (A - Z, a - z) and some punctuation and special
characters. Although wider character sets have been defined and
standardized, general support for them cannot yet be assumed to exist.
Thus, the alphabet for an artificial language should be the English alphabet
or a subset of it, without any diacritic marks.
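A small sketch of how such a constraint could be checked mechanically; the
sample words are illustrative only.

    import string

    ALLOWED = set(string.ascii_letters)  # A-Z, a-z: no diacritic marks

    def uses_allowed_alphabet(word):
        """Return True if the word uses only the plain English letters."""
        return all(ch in ALLOWED for ch in word)

    print(uses_allowed_alphabet("informatics"))   # True
    print(uses_allowed_alphabet("informática"))   # False: accented letter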
Phonetics
Automatic generation and recognition of speech are technically possible
now, and future development will make them economically feasible in a
large number of applications soon. The effort needed to generate and
especially to recognize speech strongly depends on the regularity of the
phonetic structure of the language.
Grammatical categories
Let us first consider the need for grammatical categories such as number,
gender, and tenses, without yet considering how they are to be expressed.
Modes of verbs are hardly needed, in general, since the desired meanings
can be expressed using adverbs, different conjunctions for various types of
sentences, etc. Modes like subjunctive are difficult to learn and to use in
languages like French, and they seldom have any useful meaning. The only
exception to a "modeless" system of conjugation could be the imperative.
The imperative could be used to denote an impersonal instruction or
suggestion, as opposed to personal commands, wishes, etc., which require
delicate distinctions of degrees of politeness and imperativeness, best
expressed using adverbs.
If synthetic methods are introduced, they must be regular: an affix shall not
depend on the base word and shall not affect its form.
There is, however, a very simple approach which solves the essential
problems: use only one way of attaching affixes - and suffixing is the
obvious approach, since it is more productive in current languages. Thus a
word would be morphematically parsed "backwards": simply check if the
word ends with a suffix of the language, remove the suffix and apply the
same test to the rest of the word etc., until no suffix can be found. Then one
can lexically check that the remaining word exists in the basic vocabulary; if
not, the word is assumed to be a "foreign" word (a proper noun). Notice that
the "suffixing only" principle also removes a semantic problem: assuming
that we have a word of the form prefix+base+suffix, is it to be understood as
(prefix+base)+suffix or as prefix+(base+suffix)? (Such issues are often real
problems. For instance, many people misunderstand the word "atheism" as if
it consisted of the negation prefix "a-" and the word "theism"!)
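A minimal sketch of this backwards, suffixes-only parse in Python; the
suffix inventory and the base vocabulary are invented placeholders, since
the article does not fix an actual vocabulary.

    # Hypothetical suffix inventory and base vocabulary for illustration.
    SUFFIXES = ["ness", "er", "ly"]
    VOCABULARY = {"kind", "quick", "teach"}

    def parse_word(word):
        """Strip suffixes from the end until none match, then check the base.

        Returns (base, suffixes outermost first, status). An unknown base is
        treated as a "foreign" word (e.g. a proper noun), as suggested above.
        """
        stripped = []
        changed = True
        while changed:
            changed = False
            for suffix in SUFFIXES:
                if word.endswith(suffix) and len(word) > len(suffix):
                    stripped.append(suffix)
                    word = word[:-len(suffix)]
                    changed = True
                    break
        status = "known base" if word in VOCABULARY else "foreign word"
        return word, stripped, status

    print(parse_word("kindness"))   # ('kind', ['ness'], 'known base')
    print(parse_word("teacher"))    # ('teach', ['er'], 'known base')
    print(parse_word("Helsinki"))   # ('Helsinki', [], 'foreign word')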
Some very simple and easily definable and recognizable prefixes could be
accepted (e.g. the prefix "non-", denoting simple negation of the rest of the
word).
Word order
Rules for word order can be set up relatively freely. For automatic
generation of sentences, a strict word order would be easier. Languages
with "free" word order use variations in the order to indicate nuances. For
automatic processing, it would be better to express nuances by using
adverbs and by having partial synonyms for important words. Adverbs are
definitely to be preferred, since synonyms cause serious problems in search
operations: it is difficult to search for texts discussing a phenomenon if
there is a large set of alternative words for the phenomenon.
From the computational point of view, the most natural word order would be
VSO (verb, subject, object), since that corresponds to the normal syntax of
subprogram calls. Both the subject and the objects (direct and indirect) are
comparable to arguments of a subprogram call, whereas the predicate verb
corresponds to the subprogram name. Adverbials can be regarded as optional
arguments, so they should logically appear after other arguments. Notice that
normal imperative sentences, which are so common in languages for
controlling computers, have the form VSO with the subject omitted.
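A small sketch of the analogy; the clause and the rendering function are
invented for illustration, not part of the original text.

    def clause(verb, subject, *objects, adverbials=()):
        """Render a clause in VSO order, adverbials last as optional arguments."""
        return " ".join([verb, subject, *objects, *adverbials])

    # clause("give", "John", "Mary", "book") mirrors a subprogram call
    # give(John, Mary, book), read directly as a VSO sentence:
    print(clause("give", "John", "Mary", "book", adverbials=("quickly",)))
    # -> give John Mary book quickly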
Word derivation
Word derivation should be extensive, to keep vocabularies smaller, and
generally based on suffixes. Each suffix is defined by its actual phonetic
(and literal) appearance, its role as deverbal or denominal, its class (noun or
verb), and its semantic meaning, either as a function of the meaning of the
base word or as "to be defined". The latter option means that there is no
generic predefined meaning; in that case, the meaning of each word
formed using the suffix should be defined separately and listed in a
dictionary. (In natural languages, diminutive suffixes are typically
morphemes which are thought of as indicating smallness only, but in fact
they usually belong to the latter category. A cigarette is not a small cigar in
reality, just metaphorically.)
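A minimal sketch of such a suffix definition represented as data; the
suffixes, their semantic rules, and the field names are invented placeholders.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass(frozen=True)
    class Suffix:
        form: str          # literal (and phonetic) appearance
        attaches_to: str   # "verb" (deverbal) or "noun" (denominal)
        derives: str       # class of the derived word
        # None means "to be defined": each derived word is listed separately.
        meaning: Optional[Callable[[str], str]]

    SUFFIXES = [
        Suffix("er", "verb", "noun", lambda base: f"one who {base}s"),
        Suffix("ette", "noun", "noun", None),  # e.g. cigar -> cigarette
    ]

    for s in SUFFIXES:
        rule = s.meaning("paint") if s.meaning else "listed per word in the dictionary"
        print(f"-{s.form}: {rule}")
    # -> -er: one who paints
    # -> -ette: listed per word in the dictionary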
Concluding remarks
This article has outlined an optimal international language starting from the
idea of automatic processability. For further design, it would be necessary
to fix such things as the structure of morphemes and the basis of word
creation. There have been several basic strategies of word creation in
artificial languages. (Somewhat surprisingly, the one that would most
naturally suggest itself in the modern world, using English vocabulary as the
basis, is one of the rarest.)
The Loglan language had several design criteria similar to those presented
here, probably mainly because they were considered useful for purely human
communication as well. It has often been said that Loglan is too logical for
human beings to gain popularity, since normal fluent speech does not apply
logical forms. On the other hand, Loglan has a feature which is definitely
unnatural: its vocabulary is very artificial. It might be a useful experiment to
construct a language with structure similar to that of Loglan but with Latin
or English based vocabulary.
Vocabulary Development
One way to use computers for English Language Learners is to teach
vocabulary. Kang and Dennis (1995) write, "Any attempt to treat
vocabulary learning as learning of isolated facts certainly will not
promote real vocabulary knowledge". Students need to learn
vocabulary in context and with visual clues to help them understand.
Computers can provide this rich, contextual environment. The
computer also allows students to become active learners in a one-on-
one environment. Computers can incorporate various learning
strategies as well as accommodate a variety of learning styles.
Writing
As demonstrated, computers and software can help English language
learners develop vocabulary skills and knowledge. Computers can
also help ELL (English Language Learner) students develop their
writing skills.
Computers
Most software programs are written in the English language. For
those who are seeking to expand their computer knowledge, having
the ability to read and understand the English language can be
invaluable.