
Summing it up “intelligently” or simply “copying and pasting” sentences?

Summarizing Software versus Human Agent Précis Writing

To what extent do summaries generated in a natural test environment live up to product descriptions, and how do they compare when pitched against summaries written by human agents? And how do the four summarizing products tested compare among themselves? Do different summarizers come up with the same results when fed the same text? So, is it plain low-level, algorithm-based “copying and pasting” techniques versus higher-order thinking skills in humans? In an analysis running to 100 pages, I set out to find out exactly that.

Throughout the tests, no traces of human-like intelligent capabilities have been found in machine-generated summaries

With regard to “intelligent” properties, summarizers do not live up to the promises made in product descriptions. Commercial computer software (summarizers) cannot produce summaries comparable to those produced by human agents. Throughout the tests, and not unexpectedly, no trace of human-like intelligent capabilities was found in machine-generated summaries. The methods summarizing software uses are plain low-level, algorithm-based “copying and pasting” techniques that generate summaries in an automaton-like fashion. The results are not the product of a mental process; current summarization software is incapable of generalization, condensation or abstraction. Summarizers extract, copy out or filter out original sentences or fragments in the right sequence but, content-wise, in an unconnected fashion. Summarizing software cannot distinguish the essential from the inessential; it cannot abstract the essence of original texts and condense them into a restatement (on a higher conceptual plane). Summarizing software lacks the faculties of abstract thinking, of analysis and synthesis. It has no insight; it cannot interpret facts and grasp concepts, let alone the wider overtones of any given text, e.g. deliberately used humour, sarcasm and irony, or biased tendencies. It cannot order and group facts and ideas, nor can it compare and contrast them or infer causes. Neither is it capable of condensing text into pithy restatements, nor can it reproduce text as paraphrased abridgments, nor, for the time being, can it recast sentences at even the most elementary level.

Detailed analysis

Direct comparisons of the human brain functions used in précis writing and summarizing software’s algorithms get short shrift in academic papers, at least in those available on this subject. Readers interested in this subject, yet unaccustomed to reading off-the-beaten-track topics, may find this text interesting.

Reporting structures are the most frequent structures used in any language, yet little emphasis is placed on this fact in education and (foreign) language training, as any textbook analysis will reveal. “Summarization is one of the most common acts of language behaviour. If we are asked to say what happened in a meeting, what someone has told us about another person or about an event, what a television programme was about, or what the latest news is from the Middle East, we are being asked or invited to express in condensed form the basic parts of an earlier spoken or written text or discourse.”1 Often, such summaries are shortened or condensed or abstracted onto another level. But very often they use the same verbs, verbal phrases etc. as are used in reported speech.

In this test series, I compared summaries produced or created by human agents – also called abstracts, synopses, or précis – against extracts generated or copied out by various summarizing software or programs (software agents, also called computational agents). All human agent sample summaries have been taken from Cambridge Proficiency Examination practice books (UK English). My point of departure was the hype by software companies who all too frequently endow their summarizing software with human qualities, bordering on “personification”. Most of the product descriptions and reviews would have us believe that their computing power is fully comparable with human brain power. We are promised that these programs can determine what the text is about, extract the gist of any text, pinpoint the key concepts, and reduce long texts to their essentials. What is more, we are made to believe that they can analyze a text in natural language, “taking into account its structure and semantic relationships” and even get an in-depth understanding of the underlying idea.

I have taken up the challenge posed by these overblown statements, which often represent summarizers as “intelligent”, and pursued the question of whether the various commercial summarizing programs available are mere number crunchers whose algorithms simply extract or copy out sentences and fragments, or whether they possess some kind of artificial intelligence akin to that of humans. This important difference between abstract thinking in human agents and the automaton-like properties of summarizing software is looked into in some detail and supported by stringent test results confirming the superiority of the human brain over the unthinking, machine-like properties of summarizing software.

Is software “intelligent”? – And are human agents “truly” intelligent?

In academic papers, product descriptions for commercial summarization software, and generally in the field of AI, the term “intelligent agent” is frequently used in connection with software or software components. The degree to which present-day computer software, and summarization software in particular, is “truly intelligent” is seldom a principal object of investigation, be it for a supposed lack of relevance or because investigative papers are not readily available to the public. Summarizing software, marketed as artificial intelligence (AI) software, is said to be capable of generating complete summaries (extracts) which are sometimes misleadingly called précis, synopses or abstracts, all terms which, rather, describe human-agent-produced summaries. In this analysis I have addressed the issue of the often ambiguous hype surrounding summarizing software. All too frequently, it is openly or implicitly invested with human-like intelligence.

By gauging its performance in tests in which summarization software competes directly with human-agent-produced summaries taken from textbooks preparing for the CPE (Cambridge Proficiency Examination), I have explored its computational competence and the current state of the supposed “intellectual” quality of the results generated, or the lack of both. Most present-day commercial summarizers, including those tested in this analysis, use the extraction method, with the summarizer copying out (copying and pasting) key sentences from the original text. In contrast, the abstractive summarization method is based on natural language processing, meaning the software needs truly to “understand” and “interpret” the original text and reproduce its most important information in its own “words” in an abridged form. Present-day commercially available summarizing software using the abstraction method cannot do this satisfactorily, and if there are any non-commercial summarizers in operation, they are difficult to check on.
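To make the extraction method concrete, the following is a minimal sketch in Python, assuming a crude word-frequency scoring scheme with a positional bonus for the opening sentence; the function name, stop-word list and scoring details are illustrative assumptions, not the algorithm of any product tested.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "for", "with"}

def extractive_summary(text, ratio=0.25):
    """Copy out roughly `ratio` of the sentences, chosen by crude frequency scoring."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(index, sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        base = sum(freq[w] for w in tokens) / (len(tokens) or 1)
        # Positional bonus: extractors typically favour the opening sentence(s),
        # whether or not they carry the key ideas.
        return base * (1.5 if index == 0 else 1.0)

    keep = max(1, round(len(sentences) * ratio))
    chosen = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]),
                    reverse=True)[:keep]
    # The chosen sentences are re-emitted verbatim and in their original order:
    # copying and pasting, not condensing or restating.
    return " ".join(sentences[i] for i in sorted(chosen))

Nothing in such a procedure analyses meaning; the “summary” is simply whatever sentences happen to score highest.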

Users hardly ever get complete, connected and readable summaries

Product descriptions assure potential buyers that summarizers can determine what a text is about, pinpoint core messages and key concepts and thus reduce long texts to their essentials. One software company even wants to make users believe that its summarizer can analyze text in natural language, “taking into account its structure and semantic relationships” and even get “an in-depth understanding of the underlying idea”. Furthermore, readers are promised that they can spend considerably less time understanding the general meaning of any document or website by just reading machine-extracted summaries, without missing key information. However, the tests have shown that summarization software’s machine-reading-comprehension properties lack accuracy, since users hardly ever get complete, connected and readable summaries.

None of the summarizers generated reliably consistent, complete and impeccable extracts to be used as first-stage drafts for human agent editing

According to software companies, summarizers are mainly used as a time-saving reading aid, a kind of complete executive summary which supposedly allows the reader to spend considerably less time understanding the general meaning of documents, getting familiar with their structure and reading without missing key information. In order to meet the highest of standards, they would have to deliver consistent results and generate faultless and complete extracts. However, as is often the case, theory does not square with practice at all, since the tests show that the summarizing software tested is incapable of generating acceptable summaries due to a number of shortcomings outlined below. Neither was any of the summarizers tested able to distinguish itself from the others in any conspicuous way, save in the number of irrelevant ideas generated, nor did any summarizer generate reliably consistent, complete and impeccable extracts which could be used as first-stage drafts for human agent editing.

Almost always, summarizers will extract the first sentence or the first two sentences because they have been programmed to do so, these being deemed lexically loaded and assumed to contain the essence of the text. If the first sentence contains conflicting, subordinate ideas or anecdotal content, this can make the summary less useful, if not downright wrong: the summarizer extracts the negligible first sentence(s) at the expense of more salient ideas, which may then not be extracted at all because of settings limiting the choice. The matter is aggravated when the first sentence is long and contains trivial subordinate “ideas”. Summarizers cannot recognise these irrelevant parts and do not leave them out as human agents would. The latter holds true for all compound sentences extracted.

Extracted or filtered out sentences lack cohesion and resemble bitty bits scattered across the pages

Filtered-out or extracted sentences lack unity; they are disjointed and scattered in list form across the page, or highlighted in colour in similar fashion. Most of them have the appearance of brute-force copied-and-pasted text fragments, the summaries generated by Microsoft Word’s summarizing function being the only exception. The latter compacts the selected sentences into impressive-looking paragraphs, thus making it seem that a lexical interrelationship between the “key” sentences selected is preserved. However, a more refined analysis shows that this may only be partially true. The fact that contextually unconnected sentences are placed one after the other under the false pretence that text cohesion is created or retained does not make a better summary or reading any easier. In one case a “novel” but wrong or misrelated grammatical relation was established which was not present in the original and which substantially changed the meaning of the extract under investigation. It is safe to assume that this is no isolated incident, particularly when sentences begin with a pronoun.

Routinely, the majority of the supposed key sentences extracted are of minor importance or completely irrelevant. Different summarizing programs filter out different “key elements”, and in one case the most important idea was completely missed or “overlooked” by all four summarizers tested. Looking at it from an end-user’s perspective, one could reasonably expect all summarizing software to copy out identical key sentences. Nonetheless, there is all too often too great a difference in the sentences extracted. In one randomly chosen case, there was only 30% agreement on the text extracted (measured in number of words) between two summarizers. With longer texts, results were even more varied, which casts some doubt on the algorithm-based selection mechanisms employed by different software makers.
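A rough sketch of how such an agreement figure can be computed is given below; it illustrates the idea of word-level overlap between two extracts and is an assumption for illustration, not the exact metric used in the tests.

import re

def word_agreement(extract_a, extract_b):
    """Share of the words in the shorter extract that also occur in the other one."""
    words_a = re.findall(r"[A-Za-z']+", extract_a.lower())
    words_b = re.findall(r"[A-Za-z']+", extract_b.lower())
    common = set(words_a) & set(words_b)
    shorter = min(len(set(words_a)), len(set(words_b))) or 1
    return len(common) / shorter

# Two summarizers extracting different "key" sentences from the same original:
a = "The board approved the merger in May despite fierce shareholder protests."
b = "Quarterly profits rose sharply, the chairman told the annual general meeting."
print(f"{word_agreement(a, b):.0%}")   # low agreement despite identical input text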

When the nature of the original text makes summarization software look good

Two examples illustrate that the nature of the original text can make any summarization software look good. In one of the tests, an acceptable level of computational sentence extraction was achieved. The other example was appended to an academic draft paper for easy verification. In the first case, the original contained such a high number of equally relevant key ideas that the choice of sentences extracted hardly mattered. As a result the summary is balanced, and even human agents could have made subjectively tinged choices without seeming to have missed out key ideas. The second example, from the academic draft paper, is entirely written in reported speech and gives the appearance of being connected and relatively coherent. It can be deduced that reported speech, or reporting structures whose different introductory verbs or phrases serve as semantic links providing local text cohesion, is “summarization-software friendly” in general. Or rather, that it distorts test results accordingly, since these semantic links were written by human agents and merely filtered out or copied by the summarizers. It would therefore be an example of extreme partiality to pass this copying process off as a software achievement. All things considered, I think that the test results give rise to speculation about whether acceptable sentences extracted are just fluke hits, and therefore no final conclusions can be drawn.

Then there is the issue of the optimum text length for machine-generated summaries. When done by human agents, a full précis is usually about one third of the original text length, as opposed to partial or incomplete summaries which concentrate only on certain thematic aspects. It is safe to assume that this is the optimal length for full summary writing, since it has stood the test of time. Perhaps the standard setting for machine-generated text abridgments should therefore be raised from 25% to 35%. Together with the next generation of AI software, this is likely to render better-quality extracts and provide a better balance of key elements extracted, particularly in long texts (1,000 or more words of original text).

Summarizing software as a first-stage drafter for human-agent précis writing – currently an act of faith and not quality editing

Summarizing software is also meant to be a kind of first-stage drafter for human-agent précis writing, providing a short-list of ideas which the human agent then smooths or brings into a more acceptable, i.e. coherent, format. At least, this has been predicted by some linguists. The commercial software tested is not suited for this purpose, and I do not know whether there are more sophisticated summarizing programs other than those commercially available. If summarizers were ever used as fully functional first-stage drafters, the role of human agents would be confined to connecting and polishing the sentences extracted by summarizers into a readable, coherent format. In this case, human agents would serve as mere text editors without having to read the original text themselves. Anything else would be self-defeating and make summarizing software redundant. This also poses the question of whether human agents may use machine-extracted draft summaries in good faith as a basis for producing coherent abstracts – be they on a higher level of abstraction or just barely edited machine extracts – without reading the full text, which would then be an act of faith and not an act of reason. Present-day summarizing software is not up to par to be used as a first-stage drafter, and I am very much interested to learn whether the next generation of summarizers will still operate on a lower order of “thinking”.

Disconnected and incomplete summaries – a new way of processing information?

On the subject of the scattered, disjointed and incoherent sentences in software-generated summaries discussed in this analysis, there are new, related phenomena to be observed in other areas. According to the linguist Raffaele Simone, “a new way of processing information” has developed, marked by the predominance of less complex over more complex structures. Incoherent machine-produced summaries with disjointed sentences are certainly less complex than coherent human agent abstracts. In a wider sense, bulleted lists and the limitations of MS PowerPoint and similar software are further cases in point for this new way of processing information. Software used for presentations is conspicuous for its limited writing space, hardly suitable as a carrier of more complex ideas. Further examples of this new way of processing and presenting information can be found in some UK tabloid online newspapers, with “uncluttered” single sentences displayed with generous spacing but without paragraphs. In education, a new kind of language-teaching exercise which favours matching and arranging unconnected or isolated sentences has to a large degree replaced longer and more difficult comprehension exercises. Such exercises constitute less complex structures, facilitating quick visual perception of easy-format alphabetical information “at a glance”.

 A retrograde evolutionary step?

In what way these developments are to the detriment of higher-order thinking capabilities in human agents cannot, at this point in time, be objectively established, for want of any readily available studies on this subject, save the reports (2008) on the alarming decline in average intelligence among 18-year-olds. This was verified by two reliable German sources. Moreover, it stands to reason that a shift from traditional summary writing involving higher-order thinking skills to accepting machine-made, disjointed extracted sentences in summary writing, and (unintentionally) dismissing the training of abstract thinking faculties in education as negligible, may be an evolutionary throwback starting any time soon. However, I should point out that in no way am I insinuating that some kind of deliberate behavioural conditioning is going on to adapt human agent mental capacities to the limited, number-crunching properties of software.

User acceptance – what is really known about what users think?

With summaries generated by software being as unsatisfactory as they are, it is surprising that there are no verifiable test results or critical reviews readily available. Little is known about what users really think of the quality of summarizing software. Perhaps people have different views about what a key idea is, or they are satisfied with partial and irrelevant extracts as long as they find what interests them. Or they may fill in the gaps left in machine-generated summaries from their own prior knowledge and experience, thus correcting faulty summaries or supplementing missing information while reading them, without bothering about quality. Maybe users assume the summaries to be good, and/or it is their unshakeable belief in computer experts and their software which makes them accept anything machine-produced because, having grown up in a largely uncritical environment, they do not know otherwise. Furthermore, it cannot be precluded that some users may want to vent their dissatisfaction with deficient summarization software but lack the ability to find the weak spots and articulate their frustration accordingly. More discriminating users may be resigned to putting up with what they deem to be barely mediocre software-generated summaries because, having become accustomed to not expecting too much, their level of expectation is low.

 What compounds this issue is the fact that the human brain tends to attribute sense to any “input”, meaning that even downright wrong summaries can be interpreted as “intelligent” and well-founded because users assume that the computer is infallible, and hence summaries make sense to the person reading them. This fact was confirmed in a test-series of trick-lectures which did not make any sense at all. Yet, educated native speakers found the lectures “comprehensible” and “stimulating” and believed in the authority of “Dr Fox”, an actor hired for this purpose.

Software generated summaries are far from being “intelligent”; they are difficult to read with little text cohesion, disjointed sentences scattered across the page and too many irrelevant sentences extracted or copied out

The evaluation of the test results with regard to the intellectual properties ascribed to summarizing software was, of course, a foregone conclusion. What is new is that I have shown in some detail the difference between how human agent summaries are created and how software summaries are generated. The machine-generated summaries are far from being “intelligent”; they are difficult to read, with little text cohesion, disjointed sentences scattered across the page and too many irrelevant sentences extracted or copied out. Generating extracts the way it does at present, summarization software is dispensable, the main reason being that it completely lacks higher-order “thinking skills”, properties indispensable for recognizing key messages and conceptual ideas in a text. At present, summarization software could not even be used to produce first-stage drafts for human agent editing. I think that users will have to wait for the next generation of AI software before summarizers can be fully relied on. Hopefully, the next generation of software will take data processing to a true level of natural language processing. Until such time, one had better use the advanced search function in search engines to pre-select topics of interest and rely on one’s own close-reading and/or speed-reading techniques.

1 SUMMARIZATION: SOME PROBLEMS AND METHODS, John Hutchins, University of East Anglia: [From: Meaning: the frontier of informatics. Informatics 9. Proceedings of a conference jointly sponsored by Aslib, the Aslib Informatics Group, and the Information Retrieval Specialist Group of the British Computer Society, King's College Cambridge, 26-27 March 1987; edited by Kevin P. Jones. (London: Aslib, 1987), p. 151-173.]

July 2011

A Licence to kill Standard English?

Local Englishes – A sort of local linguistic inbreeding?

Are we sleepwalking into a world of incomprehensibility? Are current trends in language development a retrograde evolutionary step? Will “local language needs” develop into some 200 different Local Englishes and replace Standard English? It seems that “local needs” have already done so in as many countries as there are official languages, as any objective analysis would reveal.
Why should the English Language, in its course of evolution or perhaps devolution, also need “to take account of local language needs” in all countries all over the world, leaving us with some 200 varieties of Local Englishes? Is Standard English not good enough?
Dorothy L. Sayers, the famous crime-story authoress, wrote the following in 1936 about the advantages of the English language: “The birthright of the English is the richest, noblest, most flexible and sensitive language ever written or spoken since the age of Pericles. […]. The English language has a deceptive air of simplicity: so have some little frocks; but they are not the kind that any fool can run up in half an hour with a machine.
Compared with such highly inflected languages as Greek, Latin, Russian and German, English appears to present no grammatical difficulties at all; but it would be truer to say that nothing in English is easy but the accidence. It is rich, noble, flexible and sensitive because it combines an enormous vocabulary of mixed origin with a superlatively civilised and almost wholly analytical syntax. This means that we have not merely to learn a great number of words with their subtle distinctions of meaning and association, but put them together in an order determined only by a logical process of thought.”

With regard to more complex language, it is my experience that seemingly convoluted, circumlocutory, or verbose language – although it does occur – is very often a compact chain of thoughts with logically ordered ideas. Conciseness requires a different functional vocabulary and different grammatical structures, and intelligible language cannot be reduced to the lowest common denominator. That would be tantamount to using ambiguous catch-alls devoid of their established dictionary meaning when precision and accuracy are called for.

A short introduction to the concept of “Local Englishes”

On the website of one of the most distinguished publishers of academic books and dictionaries, Oxford University Press, there used to be an interesting section on the development of the Englishes containing a notable prediction. It purported that non-native speakers of English would soon outnumber native speakers of English, with significant consequences. In the course of being assimilated by other nations and societies, “[…] English develops to take account of local language needs, giving rise not just to new vocabulary but also to new forms of grammar and pronunciation”. To compound matters, it was predicted that “At the same time, however, standardized ‘global’ English is spread by the media and the Internet.”
Unfortunately, the author or authors of this text do not specify what the elusive “local needs” may be and what their justification might be, thus leaving ample room for speculation. Besides, this poses the legitimate question of whether there is any essential need at all for the roughly 180 to 200 potential local variants or “Local Englishes”, ranging from Amharic, Balochi and Kyrgyz English to Zulu English. And one is left to wonder whether Standard English is not good enough and needs to be improved by non-native speakers.

They all look like English, they sound like English, but they are not Standard English. Being outlandish varieties of Standard English, they are often ambiguous and frequently tend to resemble verbal puzzles. In many instances, not even native speakers understand these sorts of English. They are often unnatural, substandard, incomprehensible and so deficient that no responsible parents would ever expose their offspring to them if they were their native tongue. They are marked by artificial non-native constructs (grammar and collocations), fancy new words no one can understand, and a novel approach to pronunciation. Thus, they become an obstacle to communicating effectively in both written and spoken English. Guessing the meaning of what is being said becomes the main skill needed to communicate after a fashion.

The process of generously taking account of “local language needs” has been going on for decades. In 1982, a harsh letter to the editor by a conference interpreter was published in the International Herald Tribune. In his letter, the writer states that in his daily work he sees close-up the English language disintegrating into unintelligibility at an alarming pace. He also says that he is often asked to render an interpretation of the “English” spoken by delegates who thought that a few years’ secondary school qualified them to cope with the most disarmingly subtle language in Europe. He bemoans the absence of any protest by native speakers at the gibberish he is often subjected to. The French, on the other hand, hold the exact opposite view in this respect, maintaining that language is difficult, verges constantly on treacherous ambiguity and, for that reason, requires study, whereas the English have always given the world the impression that any fool can speak English – and any fool now does. Please note that these are the conference interpreter’s words, not mine.

Is there a need “to take account of local needs”?

Over-simplified Local Englishes with mutilated grammar, weird new words which no speaker of Standard English can understand, and a novel approach to pronunciation are developing fast. They are confusing even to speakers of Standard English. Not only are they an obstacle to communicating effectively in writing but also in speech.

Often, one is forced to ask the speaker what he actually means if one is interested in what is being said. However, this kills a conversation, and in many cases people just nod their approval or say “yes” in the right places while trying to guess what is being said. In doing so, one reduces a meaningful conversation to a social function in which the gap between intended and interpreted meaning becomes unimportant. More often than not, this sort of English is too broad and ambiguous, leaves too much room for guessing, and asks for a high degree of patience and goodwill. It also puts a high strain on the listener and is marked by frequent backtracking and requests for additional information. I dare say that the faster the new variants of English develop, the more acute this problem will become.

My dictionary of “Local English, German Version”, although partly written with tongue in cheek, is the first attempt at documenting the nascent state of the hitherto hard-to-define and hard-to-pin-down “Local English, German version” or “German English”. Examples used to be confined to teachers’ lounges, faculty rooms and “high-security” translators’ offices, but thanks to the internet the fun can now be shared uninhibitedly across the globe.

School English, Denglish, Basic Global English and Globish – all deviants of Standard English – are likely to continue to merge into one unified system of organised balderdash, with large parts of the English language as you know it changed beyond recognition. Dominant contributing factors are unedited documents and publications – frequently of international validity – which are passed off as Standard English but are in fact written by non-native speakers of English, often in substandard, mutilated, and therefore difficult English. I have often wondered whether translation source texts written by non-native speakers of English might not be an insult to any court if they had to be submitted in the course of legal proceedings.

Non-native-speaker teachers — among the blind, the one-eyed are kings?

A Local English version, or the German sort of English, has been around for quite some time. It is considered “incorrect” English at school and becomes perfectly acceptable once formal schooling ends. There is ample proof of this in all media and on the internet, and it can be documented, that is, downloaded, screen-shot, video-captured and printed from many sources. New coinages, bastardized and corrupted words or phrases and other hard-to-understand snippets of Local English – all due to incompetence – are often used with child-like innocence and frequently give rise to great hilarity. Preliminary findings seem to suggest that the causes of “a local need” for substandard English can be traced to standards that are too low and to German speakers’ unwillingness to learn English up to the level that can actually be achieved. Apart from that, willing learners are discouraged from learning or practising English to the level that is actually achievable because there is no real incentive to become fully professional at it, since the poor status quo is considered to be the benchmark and no pecuniary rewards are offered to those striving for more.

The noddie syndrome in foreign language education

Benign and permissive teaching methods are often aimed at over-simplifying Standard English and concern themselves with the social function of a language rather than with the precise and effective conveyance of information. Previous standards of competence which used to be required of those teaching have given way to a social-worker-style pedagogy which relies on nodding vigorously in agreement with all gibberish-like verbal outpourings and studiously glossing over all kinds of substandard written material. Implicit or open acceptance of inadequate language and the varnishing over of low standards prevail in all areas where English is used. And generally, there is a conspicuous absence of any encouragement and incentive to work or study on one’s own. Error-swapping in any kind of group work, and even a refined sort of condescending encouragement of verbal balderdash on the part of those imparting English, are major contributing factors, too.

Perhaps learners are the victims of a society pandering to those unwilling or too lazy to learn Standard English, or society has, for various reasons, tacitly consented to succumbing to widespread incompetence. One could, however, call this failure to take corrective action the aiding and abetting of an unrelenting language-engineering process. Many people working in education and the language business, as well as native-speaker friends, tell me that what I am doing here in my web log must be done. However, they cannot afford to argue against this process openly because they depend on the status quo. In the meantime, they continue to suffer in silence.

Interactivity guaranteed

Could anyone take an active part in the devolution of Standard English, and in the evolution of his or her Local English version? Would it, for instance, be possible for anyone to decide that he or she has changed Standard English grammar, as was done a few years ago in the song lyrics of a European Song Contest entry? The songwriter could invoke the proclamation published on askoxford.com as an authority and lay claim to inventing new forms of grammar necessitated by local, or even, asserting oneself boldly, individual “needs”, because it would be too troublesome to apply the established Standard English code of communication. Millions of spectators were silently singing along to the lines:
“…just can’t wait until tonight baby for being with you”. This is only one example out of thousands.

Millions of new grammatical structures and words could thus be created, but I think that these approximately 200 local deviations, intercrossing with one another, may turn the English language into an indefinable, unnatural, substandard, and incomprehensible mass. Alternatively, would the local variants be implemented worldwide by the mere stroke of a pen on a set day – hokum spokum nonsensicum? Probably not. They are allowed, unchallenged, to creep in through the back door. They are developing right now under our very eyes, and people working in the language sector are bearing the brunt, yet they are obliged to turn a blind eye.

Denglish

Does Denglish represent a linguistic evolutionary step or is it just a passing folly; a pseudo-proficiency in English or just a means of showing off one’s language incompetence? Denglish is a strange mixture of English and German words or phrases. This sort of Continental neo-pidgin English is ubiquitous and most striking when bravely put into print. English words are adapted in keeping with the rules of German grammar and mixed freely and haphazardly with German, often lending a hilarious touch to the resulting muddle. However, it is when new English words coined by Germans or misapplications of otherwise correct English are thrown in that the effect becomes utterly uproarious. And to top it all, when Germans start to invent new applications for English words or even entirely new English-sounding words which no native speaker would understand, native speakers of English are in dire need of guidance through the Continental version of their mother tongue.
Not surprisingly, many of the Denglish coinages have made it into my dictionary “Local Englishes, German version” or “German English”.

Basic Global English – The lowest common denominator?

Basic Global English, or BGE, is a method intended to facilitate the learning of some sort of Neo-Pidgin English. It borrows from Standard English a basic vocabulary of some 750 words, to which an individual bespoke vocabulary of 250 words is added to cover the interests and hobbies a learner can use to explain his or her world. A maimed and mutilated grammatical system consisting of 20 rules replaces those structures of Standard English considered unmanageable by foreign minds. It also encourages the use of a sort of sign language and is not recommended for use with native speakers of English. The most prominent propounders and popularizers of this deviant of Standard English are non-native speakers of English. With missionary zeal and great conviction, they are keen to create an artificial sort of minimalist official global language – a kind of Simplified-Simple-Speak even simpler than Globish. It is considered suitable to serve as a lingua franca at the highest level among politicians, business people and other decision makers, since Basic Global English also caters for the business and banking market; a business version of Basic Global English is also available. The one advantage of BGE, however, is that even children and adults with special needs should be able to learn this runty form of pseudo-English.

Translators suffer in silence

As mentioned briefly before, unedited documents and other kinds of publications and websites, frequently of international validity, are written by non-native speakers of English and passed off as Standard English. Translators translating such non-native-speaker English texts suffer in silence, since mutilated, difficult, or hard-to-follow English is still a taboo subject. Having been made to believe that their English is better than it is, the non-native-speaker writers of such inadequate texts have no idea what problems they are causing.

I know from a reliable source that more than 50% of source documents to be translated into German have been written by non-native speakers of English. These unnatural local varieties of English look and sound like English, but they are not Standard English. They are often ambiguous, frequently resemble verbal puzzles and are extremely time-consuming to translate. In many cases, they are undecodable, the author being the only person who knows what his “innovations”, “coinages” or conundrums are supposed to mean.

Another striking feature is the fact that specialised languages seem to be on the retreat. I have seen too many source texts in non-technical fields, some by native speakers of English, in which the authors tried to “use their own words” to describe complex processes for lack of what used to be considered indispensable knowledge, and I still remember the mental pain of trying to make sense of those mostly inept and cumbersome descriptions.

I have also seen translations by non-professional “Local English speakers” in which literal, word-by-word renderings were used. Or, when writing in a foreign language, the writers thought that the mere juxtaposition of words would render comprehensible bits of text, while no native speaker would ever use such artificial and “difficult” constructions.

Machine translations creeping in on the sly?

Apart from using hard-to-follow, difficult, substandard and mutilated English which is in most cases devoid of accuracy and subtlety, most non-native speakers lack the language competence to distinguish between Standard English and hard-to-understand, ludicrous machine-generated “novel” English. I have seen bits of machine-translated text used in academic papers which did not make any sense at all. In other papers, written by members of the same linguistic “Local English” group, I came across the sweeping statement that machine translation services will reduce translation costs for governments, a service that would be used by the “young and dynamic”. I feel inclined to add: and by the incompetent and gullible.

Machine-translated websites will continue to be a contributing factor to the often grossly negligent, sometimes deliberately ignored, or even calculated corruption of the English language. These translations are sometimes not even declared as such, and more often than not, one has no option but to read them if they are, for instance, support sites. Not only are they an imposition on the reader, but they are also a danger to all those non-native speakers of English whose knowledge of the English language is limited. They may take these excrescences for Standard English and pick up, wittingly or unwittingly, vocabulary, grammatical constructions and “stylistic refinements” they think worthy of emulating. Responsible parents may even consider blocking machine-translated websites with child-protection software to shield their offspring from the adverse influence of bad language, just as they may do with websites showing adult content.

Besides, the uncritical belief in the authority of blinkered specialists and blind faith in the new authority of the computer may be further crucial factors contributing to the unsuspecting acceptance of the substandard English one comes up against on websites. Incidentally, Google itself seems to be aware of the problems involved in machine translation software. For instance, it asks users in its translation section, “Also, in order to improve quality, we need large amounts of bilingual text. If you have large amounts of bilingual or multilingual texts you’d like to contribute, please let us know”. Google’s approach to machine translation is likely to pay off in the long run, since trying to emulate the human brain is likely to render better-quality machine translations.

Would a test seal for texts edited by native speakers be helpful?

For about two years, I maintained a blog at Yahoo’s 360 site before Yahoo gave it up. True to form, I posted a kind of warning with the caption “This blog is not written in native-speaker English but in Local English, German Version (or German English). Picking up of any errors is entirely at your own risk”. I soon gathered from feedback that this message was in fact counterproductive, in that non-native speakers of English thought I was promoting Global English or the German sort of “Local Englishes”. Nothing could have been further from the truth, and I realised that I must have failed wretchedly to express the mild sarcasm intended.

So I reckon it would not be a good idea to repeat this mistake, but I have been wondering ever since whether some kind of test seal should be used to mark non-native-speaker texts that have been edited by native speakers of English. During the past six months, I have seen only two documents out of several hundred that were actually marked “Edited by + native speaker name”.

 Conclusion

 Languages have always been subject to change and have evolved naturally over time, the emphasis being on “naturally”.  However, never before has this process been artificially accelerated and manipulated by a number of factors which have largely been ignored so far.

It is surprising that such an important development in the English language, leading to a uniform system of organised balderdash, goes largely unnoticed and undisputed. In the absence of a suitable term, I have taken the liberty of dubbing this process “neo-pidginicity”. Native speakers of English are probably unaware that a kind of linguistic genetic engineering is going on right now, especially on the web. If they are aware of it, they may underestimate the impact it may have on Standard English, while others may even aid and abet this engineering process or condone it with their silence.

Raising awareness, among native as well as non-native speakers of English, of how language is helped to develop by tacit acceptance of present poor standards, poor non-native-speaker translations of websites and documents of international validity, and inadequate translation software may help contain the advance of incomprehensible and ambiguous non-native-speaker English. However, this may be wishful thinking. It can be assumed that the development of “Local Englishes”, with its likely 180 or so local deviations and their indiscriminate acceptance as separate variants in non-native English-speaking countries, will be allowed to persist unchallenged.

What, then, are the alternatives to introducing local global English variants? Although the notion of being truly competent in English is an agreeable one, there are a number of reasons why it is not possible to impose higher standards, particularly not globally; the major obstacle is the absence of any incentive to become more proficient in English, since the poor status quo is considered to be the yardstick. Furthermore, this would require study – a word which seems to be, together with others like “grammar” and “homework”, out of bounds. Taking the other stance, that is, encouraging and perpetuating the status quo and thus premeditatedly influencing the future development of the English language in an adverse manner through deliberately sloppy usage, is not a solution either and smacks too much of cynicism.

Stigmatizing substandard language seems futile, yet I have chosen to do so in the hope of raising awareness among native speakers of English, most of whom have no idea that their tongue is being tampered with by non-native speakers. Perhaps I should emphasize that this is my objective, not my “mission”.

In the meantime, a rigorous analysis of what is going on at the receiving end in the teaching process at all levels, for instance, recording and analysing classroom activities, may assist in reassessing the status quo. Less reliance on the spoken word in unnatural settings, when people learn English, may help too. It follows that it is best that we continue to abide by the role model standards set by native speakers of English.

Self-help

I wonder why people always want the best software for their computers while they upload, more often than not, substandard learning techniques into their brains

Never before has it been easier to learn a language up to a “true” near-native level. One is left to wonder why, in the age of multimedia, with easy access to reasonably priced self-teaching textbooks and English courses, an array of online dictionaries alongside hard-copy or CD versions, and a few tried and tested comprehensive grammar books with many exercises, the “Local Englishes” in general, and German English in particular, are as outlandish as they are. What follows may seem like oversimplified and often-heard advice of yesteryear. Nevertheless, the fact that the principles underlying it have been known for hundreds of years speaks for itself.

Online and hard-copy dictionaries and grammar exercise books

Wolfgang Halm, one of the most expert and prolific authors of Spanish textbooks, wrote in one of his books that for those wanting to acquire a comprehensive knowledge [of any given foreign language], there cannot be enough exercises. This statement coincides with my experience. The kind of mental gymnastics that difficult exercises provide is essential to improving one’s language capabilities in all respects. Contextual vocabulary work is crucial to acquiring a large functional vocabulary. Use as many dictionaries as possible; all too often, entries differ widely. There are more than fifteen monolingual and bilingual online dictionaries based in the UK. Some people prefer hard-cover dictionaries, including monolingual ones, in addition to the online versions. One-click translations are a poor substitute and work only with very easy texts. Those interested may want to go treasure hunting for 50-year-old grammar bestsellers that offer many more exercises than those currently used in Germany. Interestingly, they were all published around 1960 — 20 years before the communicative teaching method came into full swing with its devastating impact on standards. Surprisingly, they are still available, and if your local bookstore does not stock them, try amazon.co.uk or amazon.de.

Using internet search engines for homework, essay-writing and more

Research with internet search engines has great, hitherto untapped potential. You can edit any kind of text, check collocations, do contextual vocabulary work, and get rid of pet peeves by copying and pasting into a word-processing document as many examples as you need. Even grammatical constructions can be checked, and exercises can be compiled. There is nothing better for getting a good grasp of the language – much better than swapping errors in group discussions with your fellow students. You can do as many revisions as you like, adding ever more examples and word definitions you come across. Dictionary entries can be copied and pasted as well, even from dictionaries installed on your computer.

And for good measure, you can fine-tune your techniques by preparing (again by copying and pasting) the fragments you want voice-reading software to read to you, even on your mp3 player. The freeware program Balabolka is a good starting point for checking out this kind of software. A better-quality program, “Voice Reader Home”, is available at

http://www.linguatec.net/onlineservices/voice_reader/

and costs € 50.00.
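For readers who prefer to script this step themselves, here is a minimal sketch using the open-source pyttsx3 library as an assumed stand-in; it is not one of the programs named above, and the function name and parameters are illustrative.

import pyttsx3

def read_fragment(text, outfile=None, words_per_minute=150):
    """Read a pasted text fragment aloud, or save it as an audio file."""
    engine = pyttsx3.init()
    engine.setProperty("rate", words_per_minute)  # slow the voice down for learners
    if outfile:
        engine.save_to_file(text, outfile)        # e.g. a .wav file for a portable player
    else:
        engine.say(text)
    engine.runAndWait()

read_fragment("Summarization is one of the most common acts of language behaviour.",
              outfile="fragment.wav")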

You will, however,  need to get used to working with “meaningful” fragments. This depends on your general knowledge and language ability. Bear in mind that the quality of sources is, initially at least, important. The URLs shown in the list of search results usually give you some idea.

Use the advanced search function, because you will ideally need the domain box when you want results only from native-speaker domains, the major ones being uk, ie, nz, au, ca and us. Yahoo offers the option to search more than one domain at a time. Using domains will save you a lot of work going through documents written by non-native speakers, even though some of their documents are published on native-speaker domains. If you prefer documents that have at least been edited by native speakers of English, you need to open the search results and check, especially on edu and uni domains, where foreign students also publish their documents. As to the search technique, always use the box “the exact phrase” (Yahoo) or “the exact wording or phrase”.

Shifting or switching round words, omitting or adding words, or using the wildcard may help you find what you are looking for. If this does not help, try changing the domains or search the entire internet by leaving the domain box blank. You will then also get domains like net, com, etc., which do not, however, tell you whether they belong to native speakers. In that case, you would need to open the search results you find interesting to find out whether the document comes from a native speaker. Should you ever have to defend yourself against the accusation that you are biased against non-native-speaker English (Local Englishes), you may wish to use my stock reply: “Without best input, poor output”.
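For engines that accept query operators rather than advanced-search boxes, the same restriction can be sketched as a small helper, assuming the engine supports quoted exact phrases and the site: operator (an assumption; the boxes described above achieve the same thing).

NATIVE_SPEAKER_DOMAINS = ("uk", "ie", "nz", "au", "ca", "us")

def build_query(phrase, domains=NATIVE_SPEAKER_DOMAINS):
    """Quote the phrase exactly and restrict results to native-speaker domains."""
    sites = " OR ".join(f"site:{d}" for d in domains)
    return f'"{phrase}" ({sites})'

print(build_query("take account of local language needs"))
# "take account of local language needs" (site:uk OR site:ie OR site:nz OR site:au OR site:ca OR site:us)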

Reading is not the fashionable thing to do but without reading texts that are not “easy” there will be no mastery of any foreign language. Avoid easy readers or magazines written in simplified or Germanised English.

Do not be afraid of specialist vocabulary. After systematically going through the first 100 pages of any given specialist book, you will have covered a lot of ground and then work will progress much faster.

About this posting
This posting is the last of a series dedicated to topics dealing with various aspects of the English language which usually get short shrift on the internet and in other publications. It is, in a wider sense, concerned with the English language crumbling into incomprehensibility at alarming speed and how society is influenced by it. How do schools and universities react and in what way is literature affected by all this? Furthermore, how do people working in education and linguistics cope with this avalanche of “Local English neologisms”?
What often sounds like modern Pidgin English can generally be put down to neo-pidginicity. It is an artificially accelerated and manipulated process – or rather linguistic genetic engineering – of attempting to oversimplify Standard English, the result of which is in all cases some sort of Neo-Pidgin English or Simplified-Simple-Speak. Four major fields of contact contribute to the gradual encroachment on Standard English: Basic Global English, as advocated by Dr. Joachim Grzega; machine translations of any kind; unedited documents and publications – frequently of international validity – being passed off as Standard English but in fact written by non-native speakers of English; and the acceptance of “Local English” and of non-native speakers of English teaching their version of “Local English”. The English “produced” in all these areas of contact is often, at best, a barely elevated Pidgin English.
And to compound matters, Globish appears to be becoming a haphazard composite of all 180 or so Local Englishes and may for that very reason not be as easy as some people think once it has evolved into a sub-language of Standard English.

Available now:

A Dictionary of GERMAN ENGLISH or LOCAL ENGLISHES, German version

A Dictionary of “Local English, German Version”

Why native speakers of English are in dire need of guidance through the Local English (German Version) of their mother tongue

Basically Debased? Language Simplification in Action

How do Basic Global English, Globish, machine translations and other contributory factors to neo-pidginicity compare?

Basic Global English – A runty, genetically modified language?
When describing his Basic Global English, Herr Dr. Joachim Grzega sweepingly claims “that English words and phrases do and must differ from Standard English when English is used in intercultural situations.” Arguing from his non-native-speaker position, Herr Grzega thinks that “we need a new concept of English as a foreign language. Several analyses of non-native/non-native discourse have shown that non-native forms are actually sometimes quite intelligible and do not impede communicative success, while other non-native forms may cause communicative breakdown.” However, Herr Grzega regrettably fails to look into other causes of said communicative breakdowns, such as lax and low standards and benign teaching methods pandering to learners who may be unwilling to improve their communicative skills.

Talking about his Basic Global English in action, Herr Grzega says: “In fact, there were only problems when a native speaker was present, as their nuances, metaphors, humorous asides and double entendres confused the non-native speakers.” Although it can be seen that these are not my words, I hasten to add that I do not support any notion of discrimination or even apartheid – English native speakers should not be excluded from discussions held by any group in whatever sort of English. But this is not all. Because “metaphorical expressions are often problematic, speakers, including native speakers, are advised to abstain from using them”, asserts Herr Grzega. Furthermore, he wonders how helpful expressions are “that cannot be interpreted word-for-word in lingua-franca communication.” Instead, apart from the considerations just mentioned, his advice to native speakers of English is: “use standard speech or general colloquial speech. Speak slowly and distinctly. Your sentences should not be too complex. You may support your utterance with body language… […] but without switching into foreigner talk”. All the limitations described here also apply to Globish-Speak, Basic Global English’s older, yet stunted, business-speak brother.

Last but not least: “Don’t make unexplained utterances that require insider knowledge”. Now then, if English native speakers should wish to acquire these apparently esoteric communication skills, like body language [I think Mr Grzega means gesticulating, pantomime and grimaces], as a prerequisite to successful communication in Basic Global English and Globish, this would be no small feat.

When one compares the level of his Basic Global English with the quality of translations made by translation software, one finds a common characteristic. Texts suitable for 10- to 12-year-olds, which is about the level of Basic Global English and also of Globish speakers, are easier for machines to translate. Does this mean that when we talk in Basic Global English or Globish we use bot-like, factual and neutral words or catch-alls which have been emptied of dictionary meaning so that they might fit any experience the speaker will not take the trouble to define? Yes and no. Herr Grzega suggests that we also use body language to make up for the paucity of our Basic Global English diction. There is a lot of educational material available online: millions of pictures with a great variety of body signals and telling grimaces. And why not use my “Smiley-Speak”, which I was facetious enough to suggest in a blog post not long ago as a replacement for Basic Global English and Globish? https://sanchopansa.wordpress.com/2009/07/22/will-smiley-speak-soon-be-all-the-rage/

Machine Translations
Being biased towards AI in speech and translation programs, I will, for the time being, not delve too deeply into machine translation software and confine myself to summarizing the most salient points:

• Machine translations are still in their infancy and, just like children, they deserve our indulgence. They are bound to become better and better over time.

• There are different approaches to machine processing of written human language. I favour Google's method because – be tolerant of my oversimplification – they try to imitate what is going on in a human's mind by using as many human translations as possible as sample translations for their database, together with other methods (a toy sketch of this idea follows this list). In the long run, this combination of methods is likely to "yield" translations which are indistinguishable from human translations. By that time, they will also be able to transfer connotational meaning, which is, according to its propagator, a major deficiency of Basic Global English.

• As to the future development of the quality of machine translations, AI experts think that low-level, algorithmic mathematical languages will be succeeded by high-level, modular, intelligent languages, which, in turn, will finally be replaced by heuristic machines. These would be capable of learning from mistakes, through trial and error, and of operating like human beings. Moreover, I think that, together with Google's approach, this will make machine translations second to none over time.
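
To make the "sample translations" point a little more concrete, here is a minimal sketch in Python of the example-based idea as I understand it – my own toy illustration under my own assumptions, emphatically not Google's actual system. The handful of sentence pairs is invented for this purpose; a real engine would draw on millions of human translations and far cleverer matching.

# Toy example-based translation: reuse the human translation of the most
# similar stored sentence instead of translating word for word.
# (Illustration only; not how any real translation engine is built.)
from difflib import SequenceMatcher

# Hypothetical mini-database of human sample translations (German -> English).
SAMPLES = {
    "AI steckt noch in den Kinderschuhen.": "AI is still in its infancy.",
    "Das Projekt steckt noch in den Kinderschuhen.": "The project is still in its infancy.",
    "Er liest gern Zeitungen.": "He likes reading newspapers.",
}

def translate(sentence):
    """Return the stored human translation of the most similar sample."""
    best_target, best_score = None, 0.0
    for source, target in SAMPLES.items():
        score = SequenceMatcher(None, sentence.lower(), source.lower()).ratio()
        if score > best_score:
            best_target, best_score = target, score
    # Fall back gracefully when nothing in the database is close enough.
    return best_target if best_score > 0.6 else "(no similar sample found)"

print(translate("AI steckt noch in den Kinderschuhen"))
# Prints: AI is still in its infancy.

The crucial point of such an approach is that the quality of the output stands and falls with the quality and quantity of the human translations in the database – which is precisely why it may, in the long run, come closest to human work.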

Way back in 1983, I had the opportunity to test a translation software program at Hanover Fair, the world's biggest industrial fair. The hype machine was in full swing, and the software company prided itself on having had George Orwell's "1984" translated by one of the first commercial translation programs. The samples distributed were impressive; however, they did not show the corrections made by human translators during the editing process. A proud and patronizing assistant asked me for a sentence I wanted to have translated.
“AI steckt noch in den Kinderschuhen”, said I rather self-confidently. “AI is still in its infancy“, would have been the correct translation. This is what I got: “AI is still in its child’s shoes”.
Having a probing mind, I was curious to find out how AI translation software had progressed in the past 27 years. I chose one of the many translation programs at random and had it translate that very same sentence again. This is what it came up with in 2009: "Ai still is in the child's shoes."
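
The "child's shoes" blunder, incidentally, is easy to reproduce. The sketch below is my own deliberately naive illustration of word-for-word lookup – the little glossary is invented for this example – and it shows why an idiom dies the moment each word is translated in isolation.

# Deliberately naive word-for-word translation (illustration only):
# each German word is looked up on its own, so the idiom
# "in den Kinderschuhen stecken" ("to be in its infancy") is lost.
GLOSSARY = {
    "ai": "AI",
    "steckt": "is",
    "noch": "still",
    "in": "in",
    "den": "the",
    "kinderschuhen": "child's shoes",
}

def literal_translate(sentence):
    """Translate each word separately and glue the results back together."""
    words = sentence.lower().rstrip(".").split()
    return " ".join(GLOSSARY.get(word, word) for word in words)

print(literal_translate("AI steckt noch in den Kinderschuhen"))
# Prints: AI is still in the child's shoes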

Microsoft’s grammar correction feature included in MS Word:
A contribution to changing natural speech patterns?

Generally speaking, any tools which may help to make life easier are a welcome relief from tedious work. One such tool is the Microsoft grammar correction feature, which can be activated in addition to the spell checker (setting option: Grammar & Style). I use this program because of my interest in artificial intelligence, or AI, although it can sometimes be fun, too. In "Are you cross with me?" the program insists on "Are you crossing with me?" I am not sure if this is a machine's way of asking me, "Are you on my side when it comes to crossing the difficult bridge from human to machine translations?" Well, it is far too early to say, but I am prepared to watch the progress of AI software with the open, yet critical mind of a discriminating end-user.

In order to make suggestions, and likewise to translate text into another language, a program needs to "understand" and "interpret" human language. All too often, appropriate and established usage of the English language proves, when put into programming rules, difficult to capture and therefore hard for machines to "understand". Naturally, the question arises whether it is legitimate to simplify any given language to accommodate the needs of hitherto imperfect interpreting and translation software.

Some suggestions made by this apparently smart grammar correction tool, however, are in direct conflict with long-established, naturally evolved language structures or patterns (rules). The novel rules suggested often amount to a simplification of the language, and some of the recommended edits are rather striking in that they may change the very nature of the English language over time. The most unnatural rules are those concerning defining and non-defining relative clauses. These are difficult for most foreign language students to master – and in all probability for translation software, too. Whenever you write a sentence with "which" functioning as a defining relative clause, this message pops up:

"That or Which"
If the marked group of words is essential to the meaning of your sentence, use “that” to introduce the group of words. Do not use a comma.
If these words are not essential to the meaning, use “which” and separate the words with a comma.
• Instead of: Did you learn the dance, that is from Guatemala?
• Consider: Did you learn the dance, which is from Guatemala?
• Or consider: Did you learn the dance that is from Guatemala?

• Instead of: We want to buy the photo which Harry took.
• Consider: We want to buy the photo, which Harry took.
• Or consider: We want to buy the photo that Harry took.

Both clauses, "We want to buy the photo which Harry took" and "We want to buy the photo that Harry took", imply that there are other photos for sale, taken by people other than Harry. The two sentences are perfectly correct and limit our choice, while the recommendation "We want to buy the photo, which Harry took" means that there is only Harry's photo for sale; its essential meaning is not changed when we omit the non-defining relative clause.

There is also a recommendation as to the use of the passive voice. According to MS Word's grammar correction feature, sentences written in the passive voice are to be recast in the active voice without exception. "Passive Voice (Consider revising)" is the message that pops up.
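
How crude such checks may be under the hood is easy to imagine. The following sketch is pure guesswork on my part – certainly not Microsoft's actual implementation – but it shows how far simple pattern rules get you: they can spot a "which" without a preceding comma, or a form of "be" followed by a participle-looking word, yet they cannot possibly know whether the clause is defining or whether the passive is exactly what the writer wanted.

import re

def naive_grammar_check(sentence):
    """Flag sentences the way a simplistic rule-based checker might."""
    warnings = []
    # Rule 1: any "which" not preceded by a comma is treated as a defining
    # clause and "that" is suggested - the actual meaning is ignored.
    if re.search(r"(?<!,)\s+which\b", sentence, re.IGNORECASE):
        warnings.append('"That or Which": consider "that", or add a comma.')
    # Rule 2: a form of "be" followed by a participle-looking word is flagged
    # as passive voice, whether or not the passive is appropriate there.
    if re.search(r"\b(is|are|was|were|been|being)\s+\w+(?:ed|en)\b", sentence, re.IGNORECASE):
        warnings.append("Passive Voice (consider revising).")
    return warnings

print(naive_grammar_check("We want to buy the photo which Harry took."))
print(naive_grammar_check("The dance was performed, which pleased everyone."))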

Both defining relative clauses with "which" and the passive voice are intrinsic parts of the English language. Their usage has grown naturally over centuries. It is almost impossible to do without these structures as a "truly advanced" non-native speaker of English, as they are among the structures most frequently used by English native speakers. Only recently, I read a book on politics, written by two Englishmen and published in 2005. I was not surprised to find defining which-clauses still alive and kicking, while in academic papers in the "German Chapter of Local English" that-clauses are generally used instead.

Regarding the emphasizing "-self" pronouns and reflexive verbs, the MS Word correction program often changes them to ordinary personal pronouns without -self or -selves, blindly oblivious to the grammatical differences in meaning they carry. Emphasizing -self/-selves pronouns are always strongly stressed and are used for the sake of emphasis, generally to point out a contrast, such as:
You yourself (i.e. “you and not anyone else”) told me the story.
compared with:
You told me the story.
If humans uttering a sentence like this think it important to lay emphasis on the doer of an action, why should machines not keep the original idea when “interpreting” and “correcting” language?

For the time being, it is beyond the grasp and the analytic or interpretive power of machines, be they grammar correction software or translation machines, to consistently distinguish between such shades of meaning, let alone connotational meaning. However, machine translations will become better in time, if not human-like, while Herr Grzega and other non-native speakers of English are making every endeavour to simplify the English language into a mutilated, indistinguishable and incomprehensible mass. The same holds true for Jean-Paul Nerriere's Globish-Speak; he is the author of the book "Don't Speak English – Parlez Globish". In theory, neither of these genetically modified corruptions of English is a Pidgin English. But as is often the case, theory and practice differ substantially in the day-to-day interaction among interlocutors. Both Basic Global English and Globish are runty forms of Standard English; two kinds of immature-speak full of words and passages which are frequently hard to understand. Just as in present-day machine translations, connotational meaning cannot be conveyed in Basic Global English and Globish, although the latter two at least have the advantage of integrating body language and grimaces into their semantic structures.

Herr Grzega considers his Basic Global English to be a minimum requirement of linguistic skills for "global peace and global economic growth" – and, if I may ask, for global brainpower as well? However, in his noble attempt at promoting global and cross-cultural language competence in an "atmosphere of trust, tolerance, empathy and efficiency so that information can flow without obstacles", he seems sublimely unaware that it is his runty form of English, his mutilated Basic Global English with a paltry vocabulary of 1,000 words and a grammar reduced to a measly 20 rules, which constitutes this very obstacle.

About this posting

This posting is part of a series dedicated to topics dealing with various aspects of the English language which usually get short shrift on the internet and in other publications. It is, in a wider sense, concerned with the English language crumbling into incomprehensibility at alarming speed and how society is influenced by it. How do schools and universities react and in what way is literature affected by all this? Furthermore, how do people working in education and linguistics cope with this avalanche of “Local English neologisms”?

What often sounds like modern Pidgin English can generally be put down to neo-pidginicity. It is an artificially accelerated and manipulated process – or rather linguistic genetic engineering – of attempting to oversimplify Standard English, the result of which is in all cases some sort of Neo Pidgin English or Simplified-Simple-Speak. Four major fields of contact contribute to the gradual encroachment on Standard English: Basic Global English, as advocated by Dr. Joachim Grzega; machine translations of any kind; unedited documents and publications – frequently of international validity – being passed off as Standard English but in fact written by non-native speakers of English; and the acceptance of "Local English", with non-native speakers of English teaching their version of "Local English". The result of the English "produced" in all these areas of contact is often, at best, a barely elevated Pidgin English.

And to compound matters, Globish appears to be becoming a haphazard composite of all of the roughly 180 Local Englishes and may, for that very reason, not be as easy as some people think once it has evolved into a sub-language of Standard English.

All Fun and Games? The Fun Factory in Foreign Language Education

A giant playground for giant kids?

In an age where financial wizards, bankers and business persons are called "players" or even "global players", top managers or market-leading companies "key players", and an almost bankrupt company is suddenly said to be "back in the game", one is inclined to speculate about the origin of these voguish words. The latest coinages are "theatre", to describe the battlefields in Afghanistan, and "decompression time" – just like after a pleasurable dive in some exotic place – to describe the short period soldiers spend chilling out after a season (in keeping with the idea of leisure time) in a "theatre" before returning to their home countries.

What may be the causes of these ever-present and verifiable symptoms? Playing computer games indiscriminately may be one. Excessive game playing in language education and, often as a result of this, a lack of seriousness may be another. But in what way may other educational tools, such as computer software that all too often appears to be still in its beta stage, with error messages popping up most of the time, contribute to fostering a rather lax attitude? In what way does this affect the pliable minds of the young when they grow up with imperfect hardware and software? Do these mistakes, errors, flaws, faults or whatever we may choose to call them take on a different meaning, and do we come to regard them as natural, unavoidable occurrences? And how does an all too easy-going attitude generally impair our ability to predict, analyse and pre-empt problems? How do games in language teaching mould the characters of learners or students? And how does a generation that has grown up with computer games and plenty of "gaming experience" in and outside the classroom fare when it enters the job market?

Developmental and educational games in foreign language education
How effective are they? That would obviously depend on the sort of questions one is prepared to ask. My criteria have not changed over the years:
How much time is spent on playing games? What do I get out of them in terms of quantity and quality? How many contextual phrases, other meaningful fragments, synonymous expressions and so on have become part of my active vocabulary? And, if no hand-outs are provided, have I had a chance to copy down those words, phrases, fragments or even entire interesting sentences for future reference, to work with at home in my own time instead of relying solely on the elusive spoken word in class?

One of the most useless games in language learning I have ever taken part in dates back about 25 years. What was the point of cutting up a newspaper article, distributing the clippings to the students, making them read out the snippets and having them put the article back into its original sequence? Not one single new word was discussed and not one single definition was given from this difficult and otherwise suitable article from the London Times. And what bearing has this sort of exercise on the acquisition of a foreign language, on what goes on in our minds when we want to increase our vocabulary? I would expect to find this sort of game in an assessment centre, to test participants for characteristics like leadership qualities or their ability to fall into line in a hierarchical set-up, but not in a language class. Incidentally, none of the participants complained about this novel idea of doing vocabulary work, and I am not sure how many were aware of it and preferred to suffer in silence.

Another highlight was when a native speaker of English handed out about 15 idioms, in this case pertaining to one group – duly cut up – and asked the class to match the definitions with the idioms. No hand-outs were given to us, and I had to hurry to copy down the three idioms I did not know. What a waste of 45 precious minutes! I almost forgot to mention that students were supposed to discuss their viewpoints among themselves, with the "supervisor", or rather holiday-camp animator, exercising utmost restraint all the time. I was under the impression the supervisor was having a good time abiding by the rules of a theoretical model which, together with peer editing, group discussions and any other forms of error swapping, fosters a kind of "local linguistic inbreeding" and deludes learners into thinking that they are learning Standard English.

In order to give students an opportunity to while away the time in the classroom, textbook publishers changed the format of many textbooks and made them unnecessarily larger, catering for a need learners did not know they had: full-colour editions with lots of empty space to scribble on. Admittedly, the latter can be fun, too, especially when you are the ambitious type and design your own Rorschach tests. I guess that the print normally used in, for instance, pocket-sized textbooks would make such learners balk at reading altogether; in other words, it would remind them of "serious work or study", which seems not to be the fashionable thing to do and, above all, holds no promise of "fun". It is no surprise to see these books with cartoons in adult education, and I wonder how many trees could have been saved in the past 30 years.

One result of "creative" games may be that they help to ingrain anything which is uttered. Yet little control of "quality input" is exercised, owing to the nature of games. Ever more games are invented, as if the novelty factor were the decisive criterion. One sometimes gets the idea that a never-ending competition to invent new games is going on among educators, while the stock of really useful games has long been exhausted. A native-speaker friend of mine who had worked as a teacher of English in Hannover for several years told me that weekend seminars for teachers were held on the North Sea coast for the sole purpose of learning new games for use in the classroom. It must have been great fun for the participants, adult-sized kids as it were. One has to concede, however, that useful games may have their place in pre-school education.

Only recently, an acquaintance of mine who has no formal teacher training told me that he had volunteered to host a discussion group for migrants. The fun factor was important, he had been given to understand. And the most important thing was to just make the participants talk without talking too much himself, he told me with a smile of resignation. He had been admonished not to interfere in the free flow of ideas exchanged among the participants, only to find that his charges conversed in a mutilated, difficult, hard-to-follow and often incomprehensible Pidgin sort of German. As a result, he sat there all the while, reluctantly nodding in agreement at the gibberish emitted from eager, yet incompetent mouths. They did not know otherwise.

No wonder that he threw in the towel out of frustration after about four weeks. It was simply beyond him why it was perfectly acceptable to subject learners to bad language, to bad model sentences, to bad snatches of speech, to bad pronunciation, to bad collocation, to very bad grammar and to an extremely poor vocabulary and style. In fact, so bad that no parents would subject their children to it if their native tongue were concerned. And he concluded that, up to a certain level, one would probably find this in almost any classroom one might care to visit. I hasten to add that "bad" is used here, of course, in the sense of "a strain on the interlocutors, hard to follow, difficult or even impossible to understand".

As we have seen, playing games and other modern methods can be fun for the learner and a source of great hilarity to the critical observer. It would be remiss of me not to mention one incident when native-speaker textbook authors wanted to have some fun too. On a worksheet containing idioms and colloquial expressions to be imparted to eager students wanting to learn idiomatic or natural English, it said with great pedagogic conviction: "You may sound odd if you use them". Printed by the publisher, mind you, not a hand-written note by some disillusioned teacher. Not a word of criticism was heard at such balderdash. I, however, presumed to disagree, suggesting that it was not a very encouraging remark to put on worksheets to be distributed to students of English, especially since the copy was taken from a textbook published in England.

You would expect this sort of comment in support material for Basic Global English, which is, according to its inventor Dr. Joachim Grzega, not suitable for communication with native speakers of English. Generally, learners here in Germany assume that UK and US English is taught throughout, and many pupils and students would be very disappointed if "Basic Global English" and its somewhat older relation "Globish" were introduced on the sly, through the back door.

"Use your own words", said in a minatory voice as if it were an offence to use newly acquired vocabulary, is another rule straight from "The Book". Using one's own words must be more fun, I concluded, because of the implicit "seriousness" (equals absence of fun) inherent in building up a large diction. By implication, this rather arrogant instruction means: do not take the trouble to employ those words you might have just learned (if I had not prevented it, that is); do not enlarge your vocabulary; do not increase your power of thinking. It is common knowledge that every single word is a tool to do your thinking with; the more tools you have at your disposal, the more powerful your thinking will become. Conversely, reducing and limiting one's vocabulary would be a retrograde evolutionary step.

The following example is about a foreigner who made other people's words his own and who did seem to get a certain degree of fun out of it. When I was about 14 years old, I met one of the so-called "guest workers". He was Italian and must have been about 40 years old. Apart from his open-minded relations with Germans, which were very unusual at the time, I was struck by his excellent German. He spoke with great precision and had a large vocabulary and impeccable grammar (hold your horses, I know what you are thinking) – that is, qualities contributing to clarity. In the course of our talk, he pulled a notepad and pen out of his pocket and asked me about the meaning and spelling of a word I had just used. He then wrote the word in his notepad with great precision and care. Oddly enough, it did seem like "fun" to him, and I asked him what else he did to improve his excellent German. "It's great fun listening to the radio. I like reading newspapers as well, not the tabloids, though", he told me with great conviction.

As to the taboo word grammar, I once met a German who was very fluent, a fast talker with a large vocabulary. All the while he was churning out his words, he seemed to be having great fun. But not so those interlocutors of his who took an interest in what he was saying and did not just nod along in the right places without understanding much. My complaint may not be politically correct, but listening to him was a terrible strain because he made so many grammatical mistakes that they were actually an obstacle to comprehending what he was trying to say. According to the doctrines of modern pedagogy, he must have been a one-off, because "The Book" says that with time and practice, mistakes will disappear. With him, they had become ingrained – a fact that is frequently overlooked. Now I dare ask a bold question: if you say something grammatically wrong over and over again, how can it ever become right?

To most questions posed at the outset of this post, I can offer no answers. And those I do offer, tentative as they may be, probably fall short of general approval. The moderate use of games in the classroom can be useful, especially as a break from long hours of learning. In most cases, however, games are time-consuming and yield few measurable results. As to the problem of how a game-playing "culture" may affect society on a wider scale in terms of its brainpower and economic performance, ex-chancellor Kohl put the dilemma very succinctly about fifteen years ago:
“Germany is a huge amusement park”.
One is inclined to add now: operated by professional teenagers.

Finally, it would be interesting to see the first book written in Basic Global English, Dr. Joachim Grzega's novel and daring invention, and to find out in which section bookshops will display such a work of art.