Summing it up “intelligently” or simply “copying and pasting” sentences?

Summarizing Software versus Human Agent Précis Writing

To what extent do summaries generated in a natural test environment live up to product descriptions, and how do they fare when pitched against summaries written by human agents? How do the four summarizing products tested compare among themselves? Do different summarizers come up with the same results when fed the same text? In short, is it plain low-level, algorithm-based “copying and pasting” versus higher-order thinking skills in humans? In an analysis running to 100 pages, I set out to find out exactly that.

Throughout the tests, no traces of human-like intelligent capabilities have been found in machine-generated summaries

With regard to “intelligent” properties, summarizers do not live up to the promises made in product descriptions. Commercial summarizing software cannot produce summaries comparable to those produced by human agents. Throughout the tests, and not unexpectedly, no trace of human-like intelligent capabilities was found in machine-generated summaries. The methods summarizing software uses are plain low-level, algorithm-based “copying and pasting” techniques that generate summaries in an automaton-like fashion. They are not the result of a mental process; current summarization software is incapable of generalization, condensation or abstraction. Summarizers extract, copy out or filter out original sentences or fragments in the right sequence, but in a contextually unconnected fashion. Summarizing software cannot distinguish the essential from the inessential; it cannot abstract the essence of an original text and condense it into a restatement on a higher conceptual plane. It lacks the faculties of abstract thinking, of analysis and synthesis. It has no insight; it cannot interpret facts or grasp concepts, let alone the wider overtones of a text, e.g. deliberately used humour, sarcasm, irony or bias. It cannot order and group facts and ideas, nor compare and contrast them or infer causes. For the time being, it is capable neither of condensing text into pithy restatements, nor of reproducing it as paraphrased abridgments, nor even of recasting sentences at the most elementary level.

Detailed analysis

Direct comparisons of the human brain functions used in précis writing and summarizing software's algorithms get short shrift in academic papers, at least in those readily available on this subject. Readers interested in the topic, yet unaccustomed to reading off-the-beaten-track material, may find this text interesting.

Reporting structures are the most frequent structures used in any language, yet little emphasis is placed on this fact in education and (foreign) language training, as any textbook analysis will reveal. “Summarization is one of the most common acts of language behaviour. If we are asked to say what happened in a meeting, what someone has told us about another person or about an event, what a television programme was about, or what the latest news is from the Middle East, we are being asked or invited to express in condensed form the basic parts of an earlier spoken or written text or discourse.”1 Often, such summaries are shortened, constricted or abstracted onto another level. Yet they very often employ the same verbs and verbal phrases as are used in reported speech.

In this test series, I compared summaries produced by human agents – also called abstracts, synopses or précis – against extracts generated by various summarizing programs (software agents, also called computational agents). All human-agent sample summaries have been taken from Cambridge Proficiency Examination practice books (UK English). My point of departure was the hype of software companies, who all too frequently endow their summarizing software with human qualities, bordering on personification. Most product descriptions and reviews would have us believe that its computing power is fully comparable with human brain power. We are promised that these programs can determine what a text is about, extract the gist of any text, pinpoint the key concepts, and reduce long texts to their essentials. What is more, we are made to believe that they can analyze a text in natural language, “taking into account its structure and semantic relationships”, and even gain an in-depth understanding of the underlying idea.

I have taken up the challenge posed by these overblown statements, which often represent summarizers as “intelligent”, and pursued the question of whether the various commercial summarizing programs available are mere number crunchers whose algorithms simply extract or copy out sentences and fragments, or whether they possess some kind of artificial intelligence akin to that of humans. This important difference between abstract thinking in human agents and the automaton-like properties of summarizing software is examined in some detail and supported by stringent test results confirming the superiority of the human brain over the unthinking, machine-like workings of summarizing software.

Is software “intelligent”? – And are human agents “truly” intelligent?

In academic papers, in product descriptions for commercial summarization software, and generally in the field of AI, the term “intelligent agent” is frequently used in connection with software or software components. The degree to which present-day computer software, and summarization software in particular, is “truly intelligent” is seldom a principal object of investigation, whether because the question is deemed irrelevant or because investigative papers are not readily available to the public. Summarizing software, counted as artificial intelligence (AI) software, is said to be capable of generating complete summaries (extracts) which are sometimes misleadingly called précis, synopses or abstracts – terms which, rather, describe human-agent-produced summaries. In this analysis I have addressed the often ambiguous hype surrounding summarizing software, which is all too frequently invested, openly or implicitly, with human-like intelligence.

By gauging its performance in tests in which summarization software competes directly with human-agent-produced summaries taken from textbooks preparing for the CPE (Cambridge Proficiency Examination), I have explored its computational competence and the current state of the supposed “intellectual” quality of the results generated, or the lack of both. Most present-day commercial summarizers, including those tested in this analysis, use the extraction method: the summarizer copies out (copies and pastes) key sentences from the original text. In contrast, the abstractive summarization method is based on natural language processing, meaning the software needs truly to “understand” and “interpret” the original text and reproduce its most important information in its own “words” in abridged form. Present-day commercially available summarizing software using the abstraction method cannot do this satisfactorily, and if there are any non-commercial summarizers in operation, they are difficult to check on.
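The extraction method described above can be made concrete with a minimal sketch. The function below is a hypothetical frequency-based extractor, not the algorithm of any product tested: it scores each sentence by how often its content words occur in the whole text, copies out the top scorers verbatim, and re-emits them in their original sequence – exactly the kind of low-level “copying and pasting” discussed here.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "on", "for"}

def extractive_summary(text: str, ratio: float = 0.25) -> str:
    """Copy out the highest-scoring sentences, preserving original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Count how often each content word occurs in the whole text.
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    def score(sentence: str) -> int:
        # A sentence's score is the summed frequency of its words
        # (stopwords contribute zero, since they were never counted).
        return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

    k = max(1, round(len(sentences) * ratio))
    top = sorted(sentences, key=score, reverse=True)[:k]
    # Re-emit the chosen sentences in their original sequence,
    # content-wise unconnected, as the tests found.
    return " ".join(s for s in sentences if s in top)
```

Note that nothing here “understands” the text: the extractor cannot tell an anecdotal sentence from a key one, which is precisely the limitation the tests exposed.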

Users hardly ever get complete, connected and readable summaries

Product descriptions assure potential buyers that summarizers can determine what a text is about, pinpoint core messages and key concepts, and thus reduce long texts to their essentials. One software company even wants users to believe that its summarizer can analyze text in natural language, “taking into account its structure and semantic relationships”, and even gain “an in-depth understanding of the underlying idea”. Furthermore, readers are promised that they can spend considerably less time understanding the general meaning of any document or website by just reading machine-extracted summaries, without missing key information. However, the tests have shown that summarization software's machine-reading-comprehension properties lack accuracy, since users hardly ever get complete, connected and readable summaries.

None of the summarizers generated reliably consistent, complete and impeccable extracts to be used as first-stage drafts for human agent editing

According to software companies, summarizers are mainly used as a time-saving reading aid, a kind of complete executive summary which supposedly allows the reader to spend considerably less time understanding the general meaning of documents, getting familiar with their structure and reading without missing key information. To meet the highest standards, they would have to deliver consistent results and generate faultless and complete extracts. However, as is often the case, theory does not square with practice at all: the tests show that the summarizing software tested is incapable of generating acceptable summaries, owing to a number of shortcomings outlined below. Neither was any of the summarizers tested able to distinguish itself from the others in any conspicuous way, save in the number of irrelevant ideas generated, nor did any summarizer generate reliably consistent, complete and impeccable extracts which could be used as first-stage drafts for human-agent editing.

Almost always, summarizers will extract the first sentence or the first two sentences, because they have been programmed to do so: these sentences are deemed to be lexically loaded and to contain the essence of the text. If the first sentence contains conflicting or subordinate ideas or anecdotal content, the summary can be rendered less useful, if not downright wrong, with the summarizer extracting the negligible first sentence(s) at the expense of more salient ideas, which may then not be extracted because of settings limiting the choice. The matter is aggravated when the first sentence is long and contains trivial subordinate “ideas”. Summarizers cannot recognize these irrelevant parts and do not leave them out as human agents would. The latter holds true for all compound sentences extracted.
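A plausible reconstruction of such a lead-sentence bias (the weights are invented for illustration and do not reproduce any tested product's actual algorithm) shows how an anecdotal opening sentence can outrank genuinely salient sentences later in the text:

```python
def position_weighted(base_scores):
    """Apply a lead bias: double the opening sentence's score and
    decay the rest slightly by position. Hypothetical weights."""
    return [score * (2.0 if i == 0 else 1.0 / (1 + 0.1 * i))
            for i, score in enumerate(base_scores)]

# A trivial opening sentence (base relevance 3) beats two genuinely
# salient later sentences (base relevance 5 and 4) once weighted.
weighted = position_weighted([3, 5, 4])
best = weighted.index(max(weighted))
```

Under this sketch `best` is the opening sentence, despite its lower underlying relevance – the failure mode described above.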

Extracted or filtered-out sentences lack cohesion and lie scattered in bits across the pages

Filtered-out or extracted sentences lack unity; they are disjointed and scattered in list form across the page, or highlighted in colour in similar fashion. Most of them have the appearance of brute-force copied-and-pasted text fragments, the summaries generated by Microsoft Word's summarizing function being the only exception. The latter compresses the sentences selected into impressive-looking paragraphs, making it seem that a lexical interrelationship between the “key” sentences selected is preserved. However, a more refined analysis shows that this is at best partially true. The fact that contextually unconnected sentences are placed one after the other, under the false pretence that text cohesion is created or retained, does not make a better summary or reading any easier. In one case a “novel” but wrong or misrelated grammatical relation was established which was not present in the original and which substantially changed the meaning of the extract under investigation. It is safe to assume that this is no isolated incident, particularly when sentences begin with a pronoun.

Routinely, the majority of the supposed key sentences extracted are of minor importance or completely irrelevant. Different summarizing programs filter out different “key elements”, and in one case the most important idea was completely missed or “overlooked” by all four summarizers tested. From an end-user's perspective, one could reasonably expect all summarizing programs to copy out identical key sentences. Nonetheless, there is all too often too great a difference in the sentences extracted. In one randomly chosen case, there was only a 30% agreement on the text extracted (measured in number of words) between two summarizers. With longer texts, results were even more varied, which casts some doubt on the algorithm-based selection mechanisms employed by different software makers.
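An agreement figure of this kind can be computed with a simple word-overlap measure. Jaccard similarity over word sets is one reasonable stand-in for “agreement measured in number of words”; the exact metric used in the original tests is not specified, so this is an assumption:

```python
def agreement(extract_a: str, extract_b: str) -> float:
    """Share of distinct words the two extracts have in common
    (Jaccard similarity: intersection over union of word sets)."""
    a = set(extract_a.lower().split())
    b = set(extract_b.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 1.0
```

Two extracts sharing only a couple of words out of many would score near the 30% reported above; identical extracts score 1.0.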

When the nature of the original text makes summarization software look good

Two examples show that the nature of the original text can make any summarization software look good. In one of the tests, an acceptable level of computational sentence extraction was achieved. The other example was appended to an academic draft paper for easy verification. In the first case, the original offered a high number of equally relevant key ideas, so the choice of sentences extracted did not matter: the summary is balanced, and even human agents could have made subjectively tinged choices without seeming to have missed key ideas. The second example, from the academic draft paper, is written entirely in reported speech and gives the appearance of being connected and relatively coherent. It can be deduced that reported speech – reporting structures whose varied introductory verbs and phrases serve as semantic links providing local text cohesion – is “summarization-software-friendly” in general, or rather that it distorts test results accordingly, since these semantic links were written by human agents and merely filtered out or copied by the summarizers. It would thus be an example of extreme partiality to pass this copying process off as a software achievement. All things considered, I think the test results give rise to speculation about whether the acceptable sentences extracted are just fluke hits, and therefore no final conclusions can be drawn.

There is also the issue of the optimum text length for machine-generated summaries. When done by human agents, a full précis is usually about one third of the original text length, as opposed to partial or incomplete summaries which concentrate only on certain thematic aspects. It is safe to assume that this is the optimal length for full summary writing, since it has stood the test of time. Perhaps the standard setting for machine-generated text abridgments should therefore be raised from 25% to 35%. Together with the next generation of AI software, this is likely to render better-quality extracts and provide a better balance of key elements extracted, particularly in long texts (1,000 or more words of original text).
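The arithmetic of the proposed change is trivial but worth making concrete; the function below is purely illustrative of how such a compression setting determines the extract's word budget:

```python
def summary_budget(original_words: int, ratio: float = 0.25) -> int:
    """Word budget a summarizer keeps at a given compression setting."""
    return round(original_words * ratio)

# For a 1000-word original, raising the setting from 25% to 35%
# grows the extract from 250 to 350 words, closer to the one-third
# length of a traditional human-agent précis.
```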

Summarizing software as a first-stage drafter for human-agent précis writing – currently an act of faith and not quality editing

Summarizing software is also meant to serve as a kind of first-stage drafter for human-agent précis writing, providing a short-list of ideas which the human agent then smooths and brings into a more acceptable, i.e. coherent, format. At least, this has been predicted by some linguists. The commercial software tested is not suited for this purpose, and I do not know whether more sophisticated summarizing programs exist beyond those commercially available. If summarizers are ever used as fully functional first-stage drafters, the role of human agents would be confined to connecting and polishing the sentences extracted by summarizers into a readable, coherent format. In this case, human agents would serve as mere text editors without having to read the original text themselves. Anything else would be self-defeating and make summarizing software redundant. This also raises the question of whether human agents may, in good faith, rely on machine-extracted draft summaries as a basis for producing coherent abstracts – be they on a higher level of abstraction or just barely edited machine extracts ‒ without reading the full text, which would then be an act of faith and not an act of reason. Present-day summarizing software is not up to par for use as a first-stage drafter, and I am very much interested to learn whether the next generation of summarizers will still operate on a lower order of “thinking”.

Disconnected and incomplete summaries – a new way of processing information?

On the subject of the scattered, disjointed and incoherent sentences in software-generated summaries discussed in this analysis, there are new, related phenomena to be observed in other areas. According to the linguist Raffaele Simone, “a new way of processing information” has developed, marked by the predominance of less complex over more complex structures. Incoherent machine-produced summaries with disjointed sentences are certainly less complex than coherent human-agent abstracts. In a wider sense, bulleted lists and the limitations of MS PowerPoint and similar software are further cases in point. Presentation software is conspicuous for its limited writing space, hardly suitable as a carrier of more complex ideas. Further examples of this new way of processing and presenting information can be found in some UK tabloid online newspapers, with “uncluttered” single sentences displayed with generous spacing but without paragraphs. In education, a new kind of language-teaching exercise which favours matching and arranging unconnected or isolated sentences has to a large degree replaced longer and more difficult comprehension exercises. All of these constitute less complex structures, facilitating quick visual perception of easy-format alphabetical information “at a glance”.

 A retrograde evolutionary step?

In what way these developments are detrimental to higher-order thinking capabilities in human agents cannot, at this point in time, be objectively established, given the absence of studies readily available on the subject, save reports on an alarming decline in average intelligence among 18-year-olds (2008), verified by two reliable German sources. Moreover, it stands to reason that a shift from traditional summary writing involving higher-order thinking skills to accepting machine-made, disjointed extracted sentences in summary writing – and the (unintentional) dismissal of training in abstract thinking in education as negligible – may amount to an evolutionary throwback that could begin at any time. However, I should point out that I am in no way insinuating that some kind of deliberate behavioural conditioning is going on to adapt human mental capacities to the limited, number-crunching properties of software.

User acceptance – what is really known about what users think?

With summaries generated by software being as unsatisfactory as they are, it is surprising that there are no verifiable test results or critical reviews readily available. Little is known about what users really think about the quality of summarizing software. Perhaps people have different views about what a key idea is, or they are satisfied with partial and irrelevant extracts as long as they find what interests them. Or they may fill in the gaps left in machine-generated summaries from their own prior knowledge and experience, thus correcting faulty summaries or supplementing missing information while reading, without bothering about quality. Maybe users assume the summaries to be good, and/or it is their unshakeable belief in computer experts and their software which makes them accept anything machine-produced because, having grown up in a largely uncritical environment, they do not know otherwise. Furthermore, it cannot be precluded that some users would like to vent their dissatisfaction with deficient summarization software but lack the ability to find the weak spots and articulate their frustration accordingly. More discriminating users may be resigned to putting up with what they deem barely mediocre software-generated summaries owing to their low level of expectation, having become accustomed to not expecting too much.

What compounds the issue is the fact that the human brain tends to attribute sense to any “input”, meaning that even downright wrong summaries can be interpreted as “intelligent” and well-founded because users assume that the computer is infallible, and hence the summaries make sense to the person reading them. This was confirmed in a test series of trick lectures which did not make any sense at all. Yet educated native speakers found the lectures “comprehensible” and “stimulating” and believed in the authority of “Dr Fox”, an actor hired for the purpose.

Software generated summaries are far from being “intelligent”; they are difficult to read with little text cohesion, disjointed sentences scattered across the page and too many irrelevant sentences extracted or copied out

The evaluation of the test results with regard to the intellectual properties ascribed to summarizing software was, of course, a foregone conclusion. The difference now is that I have shown in some detail how human-agent summaries are created and how software summaries are generated. The machine-generated summaries are far from “intelligent”; they are difficult to read, with little text cohesion, disjointed sentences scattered across the page and too many irrelevant sentences extracted or copied out. Generating extracts the way it does at present, summarization software is dispensable, the main reason being that it completely lacks higher-order “thinking skills”, properties indispensable for recognizing key messages and conceptual ideas in a text. At present, machine-generated summaries could not even be used as first-stage drafts for human-agent editing. I think users will have to wait for the next generation of AI software before summarizers can be fully relied on. Hopefully, the next generation will take data processing to a true level of natural language processing. Until such time, one had better use the advanced search function of search engines to pre-select topics of interest and rely on one's own close-reading and/or speed-reading techniques.

1 John Hutchins, “Summarization: some problems and methods”, University of East Anglia. In: Meaning: the frontier of informatics. Informatics 9. Proceedings of a conference jointly sponsored by Aslib, the Aslib Informatics Group, and the Information Retrieval Specialist Group of the British Computer Society, King's College Cambridge, 26–27 March 1987; edited by Kevin P. Jones. London: Aslib, 1987, pp. 151–173.

July 2011

Update: COMPUTER–AIDED EXPLORATION OF LITERARY STYLE AND MACHINE TRANSLATIONS

Computer-Aided-Criticism Software and Machine-Translation Software – Problems and Potential

Summary

Understanding any text in all its subtlety is a prerequisite when translating from one language to another (and exceedingly desirable in literature appreciation). Like human translators, machine-translation software should have this capacity. Computer systems have proven to be very poorly suited to a refined analysis of the overwhelming complexity of language. State-of-the-art machine-translation software purporting to do just that still leaves much to be desired, as one can easily verify by having translations done by any of the many machine-translation tools available on the internet. Since text interpretation is the common denominator, machine-translation software is similar to that used in computer-aided criticism, if not identical.

My arguments highlight lesser-known problems encountered in computer-aided criticism and may serve to foster a better understanding of present-day machine-translation capabilities and their undeniably huge future potential, be that in 50 years or more. Machine-translation software and related analytical software are still in their infancy and, just like children, they deserve our indulgence. They are bound to become better and better over time. There are different approaches to the machine processing of written human language in translation software. I favour Google's method because – be tolerant of my oversimplification – it tries to replicate what the brain does by using as many human translations as possible as sample translations for its database, together with other methods (algorithmic mathematical languages). In the long run, this combination of methods is likely to render better-quality translations which will then be indistinguishable from human translations. By that time, this improved software will also be able to decode and translate connotational meaning.

For the time being, it is beyond the grasp, analytic capability or interpretational power of machines – be they computer-aided-criticism software, machine-translation programs, or grammar-correction and summarizing software – consistently to distinguish between such shades of meaning, let alone connotational meaning. However, machine translations will become better in time, if not human-like. AI experts think that low-level, algorithmic mathematical languages will be succeeded by high-level, modular, intelligent languages which, in turn, will finally be replaced by heuristic machines capable of learning from mistakes, through trial and error, and of operating like human beings.

Background information

Although I wrote the main body of the text almost 30 years ago, it is surprisingly up to date, owing to the fact that comparatively little progress has been made in this field. In January 1983, with only a modicum of theoretical background in computers and linguistics, I planned to write this essay as a fervent riposte to the editor of the American journal Psychology Today, which had published a two-part series called “The Imagination Extenders” in November and December 1982. I never sent the letter, and apparently no one else did either, probably because computer software was only just beginning to emerge and hence hardly anyone was capable of spotting the weak points in Mr Dennett's arguments. Here is an excerpt from the original article in Psychology Today:

Computer-Aided Exploration of Literary Style – A tool to a better understanding of literature?
In the two articles, the question is posed whether computers will be able to extend our imagination the way telescopes and microscopes extend our vision. The philosopher Daniel C. Dennett of Tufts University (U.S.) says they will, in two ways: 1) by extending the range of our senses and 2) by enlarging our stock of concepts. He speculates that there are hundreds of telling patterns in the number system suitable for computer analysis and suggests that computers be used to study, among other things, literary style. He says that, being a rather clanking and statistical affair, analysis of word choice and style is a delight mainly to pedants, and he wonders whether the subtle, subliminal effects of rhythm and other stylistic devices – often quite beneath the conscious threshold of their authors – can perhaps be magnified and rendered visible or audible with the help of a computer. The features the computer would heighten could be abstract patterns, biases of connotation and evocation, or intangible meaning – not matter.
Incredible as it may sound, Mr Dennett's bold claims went unchallenged: not one single letter to the editor was subsequently published. I wish to emphasize that it is not my “mission” but my objective to contribute to the discussion about machine intelligence from a practitioner's point of view, with years of experience in analysing texts – in the traditional manner.

Under Close Scrutiny — Computer-Aided Exploratory Analysis of Literature

Literature appreciation by just reading for pleasure is one way of gaining meaning from a piece of art; formal literary analysis is another. Owing to the works of Jung and Freud, as well as novel approaches to language from the new fields of neurolinguistics and psycholinguistics, literary analysis can be highly rewarding, especially when combined with the notion of contemplation. An understanding of the interaction of the many literary devices and techniques is the more academic way of finding out what a writer says and how he says it. Style, which is the object of my exploration, can therefore be an important clue to understanding “meaning” in a piece of literature. What, then, is style? Style is the outward reflection of the intrinsic sum total of everything a writer is at the moment of writing. It flows from an author's character in its broadest sense and from his life experience.

Not only does a writer express ideas of which he is aware but he also reveals subconscious ideas and conflicts. Very often, he has no knowledge as to why he chooses a particular word over another – a word that may arrive at the threshold of his consciousness like a shooting star from a vast cosmos of subconscious beliefs, suppressed desires, cherished ideals, primordial instincts, mechanized scripts and from the plane of archetypal symbols before it is clad in reason and logic.

How would a computer know in what way style contributes to meaning?

Style is as individual as a fingerprint. No two styles are ever the same and very often, the same word or outward shell, the same sound pattern has an entirely different meaning when used by another writer or in a different context. How then, would it be possible for a computer to analyse style? Even if it should be possible to programme a computer to enable it to recognize hundreds of literary devices and to make generalizations from particular examples, how would it recognize or process the fingerprint of an entirely different writer who uses language in a new and original way?

How can a computer extract meaning from a writer’s three-dimensional web of associative meaning created by the power of one single word, if the computer knows only the husk or dictionary definition of a word but not its contextual essence, its personal and private elements, its fugitive associations and flashing connotations lived and experienced by the author?
The silent speech of metaphorical language, the body language of language (my own term, but I may be wrong there) of imagery and symbolism cannot be expressed in digital numbers or in any other form in the number system, since there is an infinite number of possibilities of combining words and creating meaning in ever new groupings and juxtapositions. This problem is further aggravated by the fact that words do not mean the same to different people. Since no two contexts or situations in which words are learnt and used are ever the same, no two meanings, or in this case interpretations, can ever be the same. One could argue that a computer would alleviate this problem in that it could be an impartial judge as to what meaning a particular word should have in all cases at all times. Yet this solution would be unacceptable, as language would become manipulated, unnatural and bland. Apart from this dumbing down of words, it would smack too much of George Orwell's “Newspeak”. However, one does not need to invoke fiction in order to understand fully the impact such action would have. (The “bias and sensitivity guidelines” used by pressure groups in the US educational system afford a glimpse of what may be in store. Added in 2006.)

Mr Dennett speculates that subconscious notions expressed through the medium of style may be made visible or audible. Would this not signify that a writer's most secret thoughts, sometimes even unknown to himself, could be projected onto a monitor? Moreover, could such a rendering – a colour-coded graphic display representing conscious and unconscious thought-patterns or associative configurations – be interpreted and fully understood? Would we need another expert telling the literature “expert” what the particular graphic display unveils? Who would decode the meaning encoded in the colour graph? Who would interpret the computer's interpretations of, for instance, the “delicate effects of sound”? The meaning to be unearthed from the colour-graphic display on the monitor would be as enigmatic and complex as literature is to many people.

In order to establish personality profiles, psychologists attempt to read a person’s subconscious mind by analysing his speech pattern, his choice of words i.e. his preferences. But it will never be possible to penetrate a person’s subconscious mind and read the pictures, the language in which the subconscious mind “thinks” or communicates. Mental, invisible images, the evocations of the conscious, semi-conscious and unconscious mind cannot be recorded. Abstract ideas, mental pictures produced by evocations and connotations flowing from the composite elements of style, or even from a single word, are not subject to the law of mathematics and cannot be caged in the number system.

Literature analysis is allegedly a delight mainly to pedants, says Daniel Bennett. Does this over-generalization not contain a number of dangerous and narrowing assumptions and suggestions? Could it not be misconstrued to mean that a more profound analysis or appreciation is tedious, done only by pedants and that anyone in his right mind should never attempt to appreciate and enjoy literature by taking a closer look at it than usual – and that an “expert” analysis should be left to the computer? Are such bold claims not preparatory to reshaping and simplifying human cognitive and intuitive abilities, leading us into a yes-or-no-response-Brave New World?

There is more to appreciating literature than counting words. It is an essential characteristic of the appreciation process that through the reading experience itself, by meeting with an author’s ideas, content and substance gain a quality they would otherwise not possess. This is because a reader brings in his own thoughts, his experience and his feelings. Marlon Brando, who started his career by playing Shakespeare on Broadway, said in an interview that unless the reader gave something to it, he would not take anything from a book or poem. One could not fully understand what a writer was writing about unless one had oneself some corresponding depth, some breadth of assimilation. Computer-aided analysis of literary style may completely leave out the reactions of the reader. The responsibility would be shifted to the “expert” computer, which would do all the thinking and linking. Will human and humane feelings in literature analysis be entirely discarded and computer-encoded responses become the controlled measure of all literary works?

How would a computer “communicate” with a piece of art? Admittedly, it could be fed with a few individual images and then be programmed to boil them down into generalisations, which it would apply whenever it encountered a digital approximation of meaning pre-programmed or assigned to a particular word or combination of stylistic devices. If a more sophisticated programme should ever exist, it might even be able to match two or three literary devices from among the thousands of possible combinations and relate them to a particular phrase, sentence, or paragraph. But how would the computer attribute sense to what it finds out? In its binary interaction with a piece of literature, the computer would compare its rigid, predetermined and static programme with the real world of natural experience and communication processes inherent in a piece of literature. How would a computer know, for instance, that a horse and its metaphorical or symbolic meaning in one piece of art might not be the same in another? In what way would a computer know why a particular word has been chosen over another and why certain words have been placed side by side to create a certain effect which may be lost if one word is exchanged for another? Not only must a great number of dictionary definitions and some of the most common examples of collocation be fed into a computer, but it must also be enabled to distinguish non-compatible synonyms. Beyond that, the computer must be able to “intuit” an author’s conceptual understanding of his private and personal usage of any word, even if his meaning varies only in very subtle degrees from common usage. Would it be sufficient to feed into a computer dictionary definitions, which are only a short abstraction of real-life usage? Would it suffice to give the computer knowledge about a sandbox-life which it could never relate to experiences of its own?

One has to concede that word choice is an intrinsic part of style. Still, how does the mere counting of words cast light on meaning? A key word may be used deliberately or, sometimes, without the author’s awareness. It may be used only once and still gain significance from its context; it may be used soberly or passionately, prudently and meticulously, with missionary fervour or calculatingly, while another, quite insignificant word may appear frequently. This is because sometimes, even in this rich English language, there is a lack of other words that express the author’s intention adequately. How does one programme the computer to know that quantity or frequency does not equal quality or essence?
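The quantity-versus-quality problem described above can be made concrete with a minimal sketch of what purely frequency-based “key word” detection actually does. The sample text and stop-word list below are invented for illustration: the repeated but trivial word ranks first, while the thematically loaded word, used a single time, sinks to the bottom of the list.

```python
# A minimal sketch of frequency-based "key word" ranking; the sample text
# and stop-word list are invented for illustration only.
from collections import Counter
import re

text = """The old house creaked. The wind rose and the shutters banged.
Once, only once, she whispered the word forgiveness."""

stop_words = {"the", "and", "she", "only"}

# Tokenize, lower-case, and drop stop words -- this is all a frequency
# counter "knows" about the text.
words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stop_words]
ranking = Counter(words).most_common()

print(ranking[0])   # the repeated but trivial "once" tops the list
print(ranking[-1])  # while the pivotal "forgiveness" comes last
```

No amount of counting tells the machine that “forgiveness”, occurring once, carries the weight of the passage; frequency is simply the wrong proxy for essence.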

Technically, all stylistic devices of sound could be made visible on a monitor. Yet, would the computer “compute” the “right and only” meaning to them? Could it relate meaning created by specific sound patterns distributed over several pages to the main theme, to the pivotal points of that particular piece of art or could it judge them to belong to secondary ideas only? More importantly, would the computer be able to assimilate many other literary devices, such as symbolism, irony, puns, hyperbole, metonymy, oxymoron and above all, conceptual metaphors, which may all run parallel to meaningful sound-patterns, into a coherent context? Could the computer make intelligent distinctions between several possible interpretations that belong to the realm of surface meaning? More importantly, could it delve into deeper regions and soar into higher spheres by evocations and connotations created by sound patterns and other literary devices sandwiched onto sound? Would the computer be able to synthesize meaning from several layers of literary devices, from among the hundreds of composite elements of style, from the multi-level flow of sound and imagery?

How can the most common of hundreds of literary devices other than those describing sound – for instance, metaphors, similes, oxymoron and symbols – be made visible on a computer’s output device, since most stylistic devices do not function through sound? Literary devices, or rather stylistic devices (since most authors could not care less what they are called) are the author’s medium of expressing their ideas. They are their musical instruments with which they make images audible and they are their palettes transposing sound into images. Their artistic reflections, observations, contemplations or speculations, after having gone through the alembic of their inner worlds, become a unified piece of art and very often, the symbolic language used by authors renders their work seemingly unintelligible, requiring technical, perhaps arcane, and sometimes abstruse knowledge on the part of the reader. Even the most dedicated readers of literature may not be able to understand a work of art in its entirety and some may not be able to see beyond its storyline. At the most, they may notice the pleasant effects that can be created by sound. How can a computer programmer, whose forte is probably not the appreciation of literature, programme a computer “to see” beyond sound, to read between the lines, or to recognize a sustained sound-pattern and describe its effects?
Above all, how does one programme the integrating principle which distinguishes between nonsense and sensible ideas, and which may, through flashes of insight or intuition, arrive at new ideas? That integrating principle has not been found yet – that all-important assimilation and joining-device that is capable of attributing sense to an infinite number and variety of external stimuli and to the internal, invisible, silent and yet ever-changing world of thought-configurations.

COMPUTER-AIDED LITERATURE STUDIES REVISITED IN 2011

Still Begging the Question after 28 Years

After almost 28 years, there is still no progress in the field of computer-aided textual content analysis. Computer systems have proven to be very poorly suited to a refined analysis of the overwhelming complexity of language. Conclusions drawn by people working in this area are equivalent to crystal-ball gazing. Despite the absence of any results, the future role of computer-aided criticism is still often invoked. Up until now, computer-supported analysis of texts has not yielded any important or new results which could not be obtained by close reading. Therefore, computerized textual research has not had a significant influence on research in the humanistic disciplines. Many explanations as to why there are no useful applications with regard to the subject matter sound like feeble attempts to justify the use of computers in this field at all costs. Catchphrases used to this end are “a shift in perspective needed”, “asking new questions deviating from traditional notions of reading texts”, “the need for new interpretive strategies” and “a modified reader response”. They all refer to hitherto unknown structures, not readily apparent, which are hoped to contain vital elements of literary effects. If they cannot be recognized by humans, are they important at all? This would be tantamount to assuming that authors create a “subconscious” pattern over sometimes even several hundred pages. What are we actually missing?

Stylistics and reader response seem to be treated as two different approaches and both methods are deemed “problematic” when it comes to assessing the literary effect measured in the text itself or in its supposed impact on the reader. The author’s intent expressed in his communication with the reader and meaning are deemed more difficult to quantify than matters of phonetic patterns and grammatical structures.

Authorship studies are one area where computers can be used effectively even though very little analysis is performed by the computer itself. Very small textual particles and word clusters selected and indexed by humans are run through computers to establish an “authorial fingerprint”. Thus, complex patterns of deviations from a writer’s normal rates of word frequency are measured. There are other patterns which can be used in such authorial tests like compound words and relative clauses.
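The frequency-deviation idea behind such an “authorial fingerprint” can be sketched in a few lines. Everything below is invented for illustration: the marker words, the two snippets and the crude distance measure. Real stylometric tests (such as Burrows’s Delta) use large corpora, many function words and standardized scores, but the principle is the same: compare a disputed text’s rates for common words against an author’s habitual rates.

```python
# A toy sketch of measuring deviations from an author's habitual
# word-frequency rates; texts and marker words are invented.
def rates(text, markers):
    """Relative frequency of each marker word in the text."""
    words = text.lower().split()
    return {m: words.count(m) / len(words) for m in markers}

markers = ["the", "of", "and", "upon"]

author_baseline = ("the house of the king and the garden upon "
                   "the hill and the river of the north")
disputed_text = ("the voice of the sea and the wind upon "
                 "the shore and the cry of the gull")

base = rates(author_baseline, markers)
cand = rates(disputed_text, markers)

# Sum of absolute deviations: the smaller the value, the closer the
# disputed text sits to the author's habitual rates.
distance = sum(abs(base[m] - cand[m]) for m in markers)
print(distance)  # 0.0 here: both invented snippets share the same profile
```

Note that the computer contributes nothing but arithmetic here; the choice of marker words, the corpora and the interpretation of the distance all remain human decisions, which is exactly the point made above.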

Interestingly, professional translators hardly ever use machines for their translations. I know from experience that it is harder to edit a machine-rendered translation than to translate the text again from scratch. However, translators sometimes use computer-aided translation (CAT) tools such as Trados or Wordfast, which contain the complete memory of all translations a translator has ever done with the tool. With such a CAT tool, segments matching previously translated material are suggested as one goes through the translation. As to the quality of such translations, very little research has been done in this field, but I surmise that the resulting style may sound “bitty” if anything other than technical texts is translated.
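How such a translation memory makes its suggestions can be sketched roughly as follows. The stored segments, the fuzzy-match threshold and the helper name are all invented; real CAT tools use far more elaborate segmentation and matching, but the core idea is a similarity lookup against past work.

```python
# A toy translation-memory lookup of the kind CAT tools perform;
# the stored segments and the 75% fuzzy-match threshold are invented.
from difflib import SequenceMatcher

memory = {
    "Press the green button to start the machine.":
        "Drücken Sie den grünen Knopf, um die Maschine zu starten.",
    "Wear protective gloves at all times.":
        "Tragen Sie stets Schutzhandschuhe.",
}

def suggest(segment, threshold=0.75):
    """Return the stored translation of the most similar past segment, if any."""
    best_score, best_target = 0.0, None
    for source, target in memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_target = score, target
    return best_target if best_score >= threshold else None

print(suggest("Press the red button to start the machine."))  # fuzzy match found
print(suggest("The weather was lovely in May."))               # None: no match
```

The “bitty” style I suspect above follows directly from this design: each segment is recycled in isolation, with no regard for the flow of the surrounding text.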
It would be interesting to see how computer-aided-criticism software and machine-translation software cope with the hundreds of “Local Englishes” and, one of the major subjects of this weblog, with “German English” or “Local English, German Version” in particular. Would this be too taxing a task, as it often is for human translators when they need to waste a great deal of time guessing the meaning of idiosyncratic word coinages and very private grammar “novelties”? It must be a formidable task to write software that can handle the frequently faulty and incomprehensible English one finds in these hundreds of Local English varieties.

More Crystal Ball Gazing?

When I searched the Internet for the latest developments in computer-aided exploratory textual analysis of literature, I was surprised to find that I had not been wide of the mark in my assessment 28 years ago. As to future developments, AI experts think that low-level, algorithmic mathematical languages will be succeeded by high-level modular, intelligent languages which, in turn, will finally be replaced by heuristic machines. These would be capable of learning from mistakes, through trial and error, of operating like human beings.

Update: Dictionary of Local Englishes – New entries

For more dictionary entries and a more detailed description of the terms “Local Englishes, German Version” and “German English”, please go to the page:

A Dictionary of “Local Englishes, German Version”


Introduction

Creeping in through the back door

Hardly noticed or acquiescently accepted by native speakers of English and non-native speakers alike, the many Local English versions, and in this particular instance “German English”, are often substandard, frequently unnatural or unidiomatic and therefore hard to understand, apart from being in many cases incomprehensible balderdash. All too frequently, words have become ambiguous catch-alls which have been emptied of dictionary meaning so that they might fit any experience the speaker would not take the trouble to define. However, one must admit that the latter is probably true for all languages.

Elevating mutilated, hard-to-understand and difficult English to the ranks of Standard Englishes

Local English coinages aggravate this situation. New words that sound and look English but which aren’t English are being invented with reckless incompetence and flaunted as indelible evidence of the true standard of German English in all media.

Some may call it a mongrel language, a bastardized, distorted and degenerated version of native-speaker Standard English while others may claim that it is mankind’s next step on the evolutionary ladder. The latter group of people will find this a welcome and useful guide to “enriching” the English language. The uncritical proponents of the doctrine “anything goes” may even think that Local Englishes are the definitive aid to increasing the word power and language proficiency of learners and students at all levels.

A university lecturer once wrote in the description of her “Varieties of English” university course: “It is not wrong English. It is just different…” Palliating, extenuating and explaining away the deficiencies of Pidgin-English-like Local Englishes is the easy way of dealing with this problem, thus elevating Pidgin English to the ranks of Standard Englishes. Meanwhile, the majority of native speakers are in all likelihood completely unaware of this downgrading of their tongue.

A sort of barely elevated Pidgin which sounds like incompetent patchwork

I began my humble collection of “different”, yet often hilarious English words about 10 years ago. That was around the time when you would still find people openly criticizing the sort of English spoken in Germany. Today, you will be hard put to find realistic assessments like the following, made by Klaus Reichert, the president of the “Deutsche Akademie für Sprache und Dichtung” (German Academy for Language and Poetry), in a newspaper interview almost ten years ago: “What we take for English is often only a sort of barely elevated Pidgin which sounds like incompetent patchwork.” Mr Reichert’s noble intention to stem the “foreign infiltration” of the German language has, unfortunately, failed completely for reasons beyond his control.

All entries in this mock-dictionary are fully documented by either downloads, print-outs, screen-shots, newspaper clippings or copies of original fliers, brochures, etc.

New entries:

reservated

If you happen to see a taxi cruising any of the streets in Germany with a notice up in one of the windows saying “Reservated”, it will be no use hailing this particular taxi. Yet, many a social-worker-type teacher of English may be inclined to say that it is “close enough” to the intended “Reserved”. According to the native-speaker radio presenter reporting this item of news, it was not correct English, but good enough for a German. Come to think of it, the expression “good enough” is sometimes seen on websites regarding the performance of a particular piece of software, meaning that it is not as good as it should or could be but good enough for a certain segment of the market.

“brain-up”

The “brain-up” initiative was launched in Germany in 2004 in search of Germany’s top universities. This overzealous drive at establishing new standards of excellence for Germany’s elitist universities has led to this award-winning Denglish coinage. To all those who expected a power-booster pill wrapped in blue sugar-coating to help people sustain the peak of cerebral passion, it must have been vastly disappointing.

“Get out”

Announcements on Hannover’s trams are now made in impeccable English. But this was not always the case. More than 20 years ago, things were quite different when the then novel system of automated announcements was introduced. When reaching the last stop, passengers were asked to “get out”. Admittedly, you could expect to find a terse expression of this sort in Germany, but this one was over the top. The effect was hilarious, caused quite a stir and was reported widely in local newspapers. When asked about his word choice, the German translator said apologetically that it said so in a dictionary.

“…a few steps and you are in the green”

This word-for-word translation from German could be found on a website describing the location of a hotel. It was situated adjacent to a city park on one side and a large city forest on the other. It only proves that such monstrous examples of German English are no longer confined to teachers’ lounges and translators’ offices but can be shared globally. In the meantime, the hotel in question has a website that was completely revised by a native speaker.

“…and follow the restrictions of the HACCP”

What a nasty nuisance these limitations are! I found this bit on a website while translating another. This example confirms the axiom that non-native speaker English all too often lacks the subtlety necessary to express issues that are of great concern. The HACCP (Hazard Analysis Critical Control Point) is an internationally recognised scheme of food safety standards. Here are some suggestions for a more responsible translation:

“…and abide by the rules and regulations laid down in the HACCP directives.”
or:
“ …and we meet the demands laid down in the HACCP regulation.”
or:
“… and our procedures are in compliance with the food and safety requirements of the HACCP regulation.”

“Please check your coats and bags” (notice in a library)

Do not panic when you read something to this effect in Germany. You are not expected to check your grooming. Neither are you required to have a quick glance to see whether your bag’s body has a lovely sheen and is otherwise spotlessly clean.
In German English, this is used for “check in your coats and bags”.

“Please put out the television”

Again, no need to be alarmed. No need to go looking for a fire blanket or fire extinguisher. In all probability, your host’s television set is not on fire. In German English, it simply means “Switch off the telly.”

More highlights of “Local Englishes“

In case of fire, do your utmost to alarm the hotel porter. ( Vienna)

Please leave your values at the front desk. (Paris)

Customers who find the waitresses rude ought to see the manager. (Kenya)

Patrons are requested not to have children in the bar. (Norway)

The lift is being fixed for the next day. During that time we regret that you will be unbearable. (Bucharest, Romania)

You are invited to take advantage of the chambermaid. (Japan)

All the above highlights are taken from:
http://www.caterersearch.com/Home/ (2005)

Please to try the tarts. They are ready for you on the trolley.

This is from a flyer enclosed with the menu in a luxury hotel in Egypt.
(The Book of Mistaikes, Gyles Brandreth, First Futura edition 1982 (UK),
Copyright © Macdonald & Co (Publishers) Ltd and Victorama Ltd)

Pending the outcome of nationwide discussion:

“She’s got a knuckle in her eye” (from the original song lyrics)

“Knuckle” is the bone of contention here. An extensive internet search with about 10 search engines did not yield a single collocation with “got a knuckle in * eye”. All results found are from the Local English domain “de” and refer to the song in question. It would be easy to dismiss this coinage as local gibberish. There are, however, two problems. The songwriter is American, but the term in question does not seem to be native-speaker English. Apart from that, many Germans had problems with this passage, as the transcripts of the lyrics discussed in internet forums show. Before the official song lyrics had been published, fans had replaced “knuckles” with “luck” and thus changed the meaning: “She’s got luck all in her eye”.

The official version goes like this: “She’s got a knuckle in her eye”. It appears that to those fans who had preferred “luck”, “knuckles” did not make much sense.
If there was some cock-up when the song was recorded, we will never know. Who would confess to being a bungling bunch of beta-performers? If it was really meant to be “She’s got a knuckle in her eye”, then the Local English in Germany will be enriched by a “meaningful and important” coinage.

And for good measure another bone of contention a bit further down in the song lyrics:

“He drops a pause”

An internet search did not yield any results at all on native-speaker domains. Nonetheless, in this case one could argue that the songwriter has used his “artistic” licence. Incidentally, a similarly heated discussion was going on on the German domain “de”. The point of this discussion was, again, a novelty due to devoted fans trying to transcribe the song before the lyrics had officially been published. The alternative to “He drops a pause” was “He drops a puss”, again owing to the singer’s unclear pronunciation.

For more dictionary entries go to Page:

A Dictionary of “Local Englishes, German Version”

at: https://sanchopansa.wordpress.com/a-dictionary-of-local-english-german-version/

A Licence to kill Standard English?

Local Englishes – A sort of local linguistic inbreeding?

Are we sleepwalking into a world of incomprehensibility? Are current trends in language development a retrograde evolutionary step? Will “local language needs” develop into some 200 different Local Englishes and replace Standard English? It seems that “local needs” have already done so in as many countries as there are official languages, as any objective analysis would reveal.
Why should the English language, in its course of evolution or perhaps devolution, also need “to take account of local language needs” in all countries all over the world, leaving us with some 200 varieties of Local Englishes? Is Standard English not good enough?
Dorothy L. Sayers, the famous crime-story authoress wrote in 1936 the following about the advantages of the English language: “The birthright of the English is the richest, noblest, most flexible and sensitive language ever written or spoken since the age of Pericles. […]. The English language has a deceptive air of simplicity: so have some little frocks; but they are not the kind that any fool can run up in half an hour with a machine.
Compared with such highly inflected languages as Greek, Latin, Russian and German, English appears to present no grammatical difficulties at all; but it would be truer to say that nothing in English is easy but the accidence. It is rich, noble, flexible and sensitive because it combines an enormous vocabulary of mixed origin with a superlatively civilised and almost wholly analytical syntax. This means that we have not merely to learn a great number of words with their subtle distinctions of meaning and association, but put them together in an order determined only by a logical process of thought.”

With regard to more complex language, it is my experience that seemingly convoluted, circumlocutory, or verbose language – although it does occur – is very often a compact chain of thoughts with logically ordered ideas. Conciseness requires a different functional vocabulary and different grammatical structures and intelligible language cannot be reduced to the lowest common denominator. This would be tantamount to using ambiguous catch-alls devoid of their established dictionary meaning when precision and accuracy are called for.

A short introduction to the concept of “Local Englishes”

On the website of one of the most distinguished publishers of academic books and dictionaries, Oxford University Press, there used to be a section on the development of the Englishes containing an interesting prediction. It purported that the number of non-native speakers of English would soon outnumber native speakers of English, with significant consequences. In the course of being assimilated by other nations and societies, “[…] English develops to take account of local language needs, giving rise not just to new vocabulary but also to new forms of grammar and pronunciation”. To compound matters, it was predicted that “At the same time, however, standardized ‘global’ English is spread by the media and the Internet.”
Unfortunately, the author or authors of this text do not specify what the elusive “local needs” may be and what their justification might be, thus leaving ample room for speculation. Besides, this poses the legitimate question of whether there is an essential need at all for the roughly 180 to 200 potential local variants or “Local Englishes”, ranging from Amharic, Balochi and Kyrgyz English to Zulu English. And one is left to wonder if Standard English is not good enough and needs to be improved by non-native speakers.

They all look like English, they sound like English, but they are not Standard English. Being the outlandish varieties of Standard English, they are often ambiguous and frequently tend to resemble verbal puzzles. In many instances, not even native speakers understand these sorts of English. They are often unnatural, substandard, incomprehensible and so deficient that no responsible parent would ever expose his offspring to them if they were his native tongue. They are marked by artificial non-native constructs (grammar and collocations), fancy new words no one can understand, and a novel approach to pronunciation. Thus, they become an obstacle to communicating effectively in both written and spoken English. Guessing the meaning of what is being said becomes the main skill needed to communicate after a fashion.

The process of generously taking account of “local language needs” has been going on for decades. In 1982, a harsh letter to the editor by a conference interpreter was published in the International Herald Tribune. In his letter, the writer states that in his daily work he sees close-up the English language disintegrating into unintelligibility at an alarming pace. He also says that he is often asked to render an interpretation of the “English” spoken by delegates who thought that a few years’ secondary school qualified them to cope with the most disarmingly subtle language in Europe. He bemoans the absence of any protest by native speakers at the gibberish he is often subjected to. The French, on the other hand, hold the exact opposite view in this respect, maintaining that language is difficult, verges constantly on treacherous ambiguity and, for that reason, requires study, whereas the English have always given the world the impression that any fool can speak English – and any fool now does. Please note that these are the conference interpreter’s words, not mine.

Is there a need “to take account of local needs”?

Over-simplified Local Englishes, with mutilated grammar, weird new words which no speaker of Standard English can understand, and a novel approach to pronunciation, are developing fast. They are confusing even to speakers of Standard English. Not only are they an obstacle to communicating effectively in written English but also in speech.

Often, one is forced to ask the speaker what he actually means if one is interested in what is being said. However, this kills a conversation, and in many cases people just nod their approval or say “yes” in the right places while trying to guess what is being said. In doing so, one reduces a meaningful conversation to a social function in which the gap between intended and interpreted meaning becomes unimportant. More often than not, this sort of English is too broad and ambiguous, leaves too much room for guessing, and demands a high degree of patience and goodwill. It also puts a high strain on the listener and is marked by frequent backtracking and requests for additional information. I dare say that the faster the new variants of English develop, the more acute this problem will become.

My dictionary of “Local English, German Version”, although partly written with tongue in cheek, is the first attempt at documenting the nascent state of the hitherto hard-to-define and hard-to-pin-down “Local English, German Version” or “German English”. Examples used to be confined to teachers’ lounges, faculty rooms and “high-security” translators’ offices, but thanks to the internet, the fun can now be shared uninhibitedly across the globe.

School English, Denglish, Basic Global English and Globish – all deviants of Standard English – are likely to continue to merge into one unified system of organised balderdash, with large parts of the English language as you know it changed beyond recognition. Dominant contributing factors are unedited documents and publications – frequently of international validity – which are passed off as Standard English but are in fact written by non-native speakers of English, often in substandard, mutilated, and therefore difficult English. I have often wondered whether translation source texts written by non-native speakers of English might not be an insult to any court if these documents had to be submitted in the course of legal proceedings.

Non-native-speaker teachers — among the blind, the one-eyed are kings?

A Local English version, or the German sort of English, has been around for quite some time. It is considered “incorrect” English at school but becomes perfectly acceptable once formal schooling ends. There is ample proof of this to be found in all media and on the internet, and it can be documented, that is, downloaded, screen-shot, video-recorded and printed from many sources. New coinages, bastardized and corrupted words or phrases and other hard-to-understand snippets of Local English – all due to incompetence – are often used with child-like innocence and frequently give rise to great hilarity. Preliminary findings seem to suggest that the causes of “a local need” for substandard English can be traced to standards that are too low and to German speakers’ unwillingness to learn English to the level that can actually be achieved. Apart from that, willing learners are discouraged from learning or practising English to the level that is actually achievable because there is no real incentive to become fully professional at it: the poor status quo is considered the benchmark and no pecuniary rewards are offered to those striving for more.

The noddie syndrome in foreign language education

Benign and permissive teaching methods are often aimed at over-simplifying Standard English and concern themselves with the social function of a language rather than with the precise and effective conveyance of information. Previous standards of competence which used to be required of those teaching have given way to a social-worker-style pedagogy which relies on nodding vigorously in agreement with all gibberish-like verbal outpourings and studiously glossing over all kinds of substandard written material. Implicit or open acceptance of inadequate language and the varnishing of low standards prevail in all areas where English is used. And generally, there is a conspicuous absence of any encouragement and incentive to work or study on one’s own. Error-swapping in any kind of group work, and even a refined sort of condescending encouragement of verbal balderdash on the part of those imparting English, is a major contributing factor, too.

Perhaps learners are the victims of a society pandering to those unwilling or too lazy to learn Standard English, or society has, for various reasons, tacitly consented to succumbing to widespread incompetence. One could, however, call this failure to take corrective action aiding and abetting this unrelenting language engineering process. Many people working in education and the language business, as well as native-speaker friends, tell me that what I am doing here in my weblog must be done. However, they cannot afford to argue against this process openly because they are dependent on the status quo. In the meantime, they continue to suffer in silence.

Interactivity guaranteed

Could anyone take an active part in the devolution of Standard English, and in the evolution of his or her Local English version? Would it, for instance, be possible for anyone to decide that he or she has changed Standard English grammar, as was done a few years ago in the song lyrics at a European Song Contest? The songwriter could invoke the proclamation published on askoxford.com as an authority and lay claim to inventing new forms of grammar necessitated by local – or even, when asserting oneself boldly, individual – “needs”, because it would be too troublesome to apply the established Standard English code of communication. Millions of spectators were silently singing along to the line:
“…just can’t wait until tonight baby for being with you”. This is only one example out of thousands.

Millions of new grammatical structures and words could thus be created, but I think that these approximately 200 local deviations, intercrossing with one another, may turn the English language into an indefinable, unnatural, substandard and incomprehensible mass. Alternatively, would the local variants be implemented worldwide by the mere stroke of a pen on a set day – hokum spokum nonsensicum? Probably not. They are allowed to creep in unchallenged through the back door. They are developing right now under our very eyes, and people working in the language sector are bearing the brunt, yet they are obliged to turn a blind eye.

Denglish

Does Denglish represent a linguistic evolutionary step, or is it just a passing folly – a pseudo-proficiency in English, or merely a means of showing off one’s language incompetence? Denglish is a strange mixture of English and German words and phrases. This sort of Continental neo-pidgin English is ubiquitous and most striking when bravely put into print. English words are adapted in keeping with the rules of German grammar and mixed freely and haphazardly with German, often lending a hilarious touch to the resulting muddle. However, it is when new English words coined by Germans, or misapplications of otherwise correct English, are thrown in that the effect becomes utterly uproarious. And to top it all, when Germans start to invent new applications for English words, or even entirely new English-sounding words which no native speaker would understand, native speakers of English are in dire need of guidance through the Continental version of their mother tongue.
Not surprisingly, many of the Denglish coinages have made it into my dictionary “Local Englishes, German version” or “German English”.

Basic Global English – The lowest common denominator?

Basic Global English, or BGE, is a method to facilitate the learning of some sort of Neo-Pidgin English. It borrows from Standard English a basic vocabulary of some 750 words, to which an individual bespoke vocabulary of 250 words is added to cover a learner’s interests and hobbies, so that he or she can explain his or her world with it. A maimed and mutilated grammatical system consisting of 20 rules replaces those structures of Standard English considered unmanageable by foreign minds. BGE also encourages the use of a sort of sign language and is not recommended for use with native speakers of English. The most prominent propounders and popularizers of this deviant of Standard English are non-native speakers of English. With missionary zeal and great conviction, they are keen to create an artificial sort of minimalist official global language – a kind of Simplified-Simple-Speak even simpler than Globish. It is considered suitable to serve as a lingua franca at the highest level among politicians, business persons and other decision makers, and a business version of Basic Global English caters for the business and banking market. The one advantage of BGE, however, is that even children and adults with special needs should be able to learn this runty form of pseudo-English.

Translators suffer in silence

As mentioned briefly before, unedited documents, publications and websites, frequently of international validity, are written by non-native speakers of English and passed off as Standard English. Translators working from such non-native-speaker English texts suffer in silence, since the subject of mutilated, difficult or hard-to-follow English is still taboo. Having been made to believe that their English is better than it is, the non-native-speaker writers of such inadequate texts have no idea what problems they are causing.

I know from a reliable source that more than 50% of source documents to be translated into German have been written by non-native speakers of English. These unnatural local varieties of English look and sound like English, but they are not Standard English. They are often ambiguous, frequently resemble verbal puzzles, and are extremely time-consuming to translate. In many cases they are undecodable, the author being the only person who knows what his “innovations”, “coinages” or conundrums are supposed to mean.

Another striking feature is that specialised languages seem to be in retreat. I have seen too many source texts in non-technical fields, also by native speakers of English, in which the authors tried to “use their own words” to describe complex processes for lack of what used to be considered indispensable knowledge – and I still remember the mental pain of trying to make sense of those mostly inept and cumbersome descriptions.

I have also seen translations by non-professional “Local English speakers” in which literal, word-by-word renderings were used. Or, when writing in a foreign language, the writers thought that the mere juxtaposition of words would produce comprehensible bits of text, although no native speaker would ever use such artificial and “difficult” constructions.

Machine translations creeping in on the sly?

Apart from using hard-to-follow, substandard and mutilated English which is in most cases devoid of accuracy and subtlety, most non-native speakers lack the language competence to distinguish between Standard English and hard-to-understand, ludicrous, machine-generated “novel” English. I have seen bits of machine-translated text used in academic papers which did not make any sense at all. In other papers, written by members of the same linguistic “Local English” group, I came across the sweeping statement that machine translation services will reduce translation costs for governments – a service that would be used by the “young and dynamic”. I feel inclined to add: and by the incompetent and gullible.

Machine-translated websites will continue to contribute to the often grossly negligent, sometimes deliberately ignored, or even calculated corruption of the English language. These translations are sometimes not even declared as such, and more often than not one has no option but to read them if they are, for instance, support sites. Not only are they an imposition on the reader, but they are also a danger to all those non-native speakers of English whose knowledge of the language is limited. Such readers may take these excrescences for Standard English and pick up, wittingly or unwittingly, vocabulary, grammatical constructions and “stylistic refinements” they think worthy of emulating. Responsible parents may even consider blocking machine-translated websites with child-protection software to shield their offspring from the adverse influence of bad language, just as they may do with websites showing adult content.

Besides, the uncritical belief in the authority of blinkered specialists and blind faith in the new authority of the computer may be further crucial factors contributing to the unsuspecting acceptance of the substandard English one comes up against on websites. Incidentally, Google itself seems to be aware of the problems involved in machine translation software. For instance, it asks users in its translation section: “Also, in order to improve quality, we need large amounts of bilingual text. If you have large amounts of bilingual or multilingual texts you’d like to contribute, please let us know”. Google’s approach to machine translation is likely to pay off in the long run, since trying to emulate the human brain is likely to yield better-quality machine translations.

Would a test seal for texts edited by native speakers be helpful?

For about two years, I maintained a blog at Yahoo’s 360 site before Yahoo gave it up. True to form, I posted a kind of warning with the caption: “This blog is not written in native-speaker English but in Local English, German Version (or German English). Picking up of any errors is entirely at your own risk”. I soon gathered from feedback that this message was in fact counterproductive, in that non-native speakers of English thought I was promoting Global English or the German sort of “Local Englishes”. Nothing could have been further from the truth, and I realised that I must have failed wretchedly to express the mild sarcasm intended.

So I reckon it would not be a good idea to repeat this mistake, but I have been wondering ever since whether some kind of test seal should be used to mark non-native-speaker texts that have been edited by native speakers of English. During the past six months, I have seen only two documents out of several hundred that were actually marked “Edited by” followed by a native speaker’s name.

Conclusion

Languages have always been subject to change and have evolved naturally over time, the emphasis being on “naturally”. However, never before has this process been artificially accelerated and manipulated by a number of factors which have largely been ignored so far.

It is surprising that such an important development in the English language, leading to a uniform system of organised balderdash, goes largely unnoticed and undisputed. In the absence of a suitable term, I have taken the liberty of dubbing this process “neo-pidginicity”. Native speakers of English are probably unaware that a kind of linguistic genetic engineering is going on right now, especially on the web. Those who are aware of it may underestimate the impact it may have on Standard English, while others may even aid and abet this engineering process or condone it with their silence.

Raising awareness – among native as well as non-native speakers of English – of how language is being helped to develop by the tacit acceptance of present poor standards, poor non-native-speaker translations of websites and documents of international validity, and inadequate translation software may help contain the advance of incomprehensible and ambiguous non-native-speaker English. However, this may be wishful thinking. It can be assumed that the development of “Local Englishes”, with its likely 180 or so local deviations and their indiscriminate acceptance as separate variants in non-native English-speaking countries, will be allowed to persist unchallenged.

What are the alternatives, then, to introducing local global English variants? Although the notion of being truly competent in English is an agreeable one, there are a number of reasons why it is not possible to impose higher standards, particularly not globally, the major obstacle being the absence of any incentive to become more proficient in English since the poor status quo is considered the yardstick. Furthermore, this would require study – a word which seems to be out of bounds, together with others like “grammar” and “homework”. Taking the other stance – that is, deliberately encouraging and perpetuating the status quo and thus premeditatedly influencing the future development of the English language in an adverse manner by using deliberately sloppy language – is not a solution either, and smacks too much of cynicism.

Stigmatizing substandard language seems futile, yet I have chosen to do so in the hope of raising awareness among native speakers of English, most of whom have no idea that their tongue is being tampered with by non-native speakers. Perhaps I should emphasize that this is my objective, not my “mission”.

In the meantime, a rigorous analysis of what is going on at the receiving end in the teaching process at all levels, for instance, recording and analysing classroom activities, may assist in reassessing the status quo. Less reliance on the spoken word in unnatural settings, when people learn English, may help too. It follows that it is best that we continue to abide by the role model standards set by native speakers of English.

Self-help

I wonder why people always want the best software for their computers while they upload, more often than not, substandard learning techniques into their brains.

Never before has it been easier to learn a language up to a “true” near-native level. One is left to wonder why, in the age of multimedia, easy access to reasonably priced self-teaching textbooks and English courses, an array of online, hard-copy and CD dictionaries, and a few tried and tested comprehensive grammar books with many exercises, the “Local Englishes” in general, and German English in particular, are as outlandish as they are. What follows may seem like the oversimplified and often-heard advice of yesteryear. Nevertheless, the fact that the principles underlying it have been known for hundreds of years speaks for itself.

Online-hardcover dictionaries and grammar exercise books

One of the most expert and prolific authors of Spanish textbooks, Wolfgang Halm, wrote in one of his books that for those wanting to acquire a comprehensive knowledge [of any given foreign language], there cannot be enough exercises. This statement coincides with my experience. The kind of mental gymnastics that difficult exercises provide is essential to improving one’s language capabilities in all respects. Contextual vocabulary work is crucial to acquiring a large functional vocabulary. Use as many dictionaries as possible; all too often, entries differ widely. There are more than fifteen monolingual and bilingual online dictionaries based in the UK. Some people prefer hardcover dictionaries, including monolingual ones, alongside the online versions. One-click translations are a poor substitute and work only with very easy texts. Those interested may want to go treasure-hunting for 50-year-old grammar bestsellers that offer many more exercises than those currently used in Germany. Interestingly, they were all published around 1960 – some 20 years before the communicative teaching method came into full swing with its devastating impact on standards. Surprisingly, they are still available, and if your local bookstore does not stock them, try amazon.co.uk or amazon.de.

Using internet search engines for homework, essay-writing and more

Research with internet search engines has a great, hitherto untapped potential. You can edit any kind of text, check collocations, do contextual vocabulary work, and get rid of pet peeves by copying and pasting into a word-processing document as many examples as you need. Even grammatical constructions can be checked, and exercises can be compiled. There is nothing better for getting a good grasp of the language – much better than swapping errors in group discussions with your fellow students. You can do as many revisions as you like, adding ever more examples and word definitions as you come across them. Dictionary entries can be copied and pasted as well, even from dictionaries installed on your computer.

And for good measure, you can fine-tune your techniques by preparing (again by copying and pasting) those fragments you want voice-reading software to read to you, even on your mp3 player. The freeware program Balabolka is a good start for checking out this kind of software. A better-quality program, “Voice Reader Home”, is available at http://www.linguatec.net/onlineservices/voice_reader/ and costs €50.00.

You will, however, need to get used to working with “meaningful” fragments. This depends on your general knowledge and language ability. Bear in mind that the quality of sources is, initially at least, important. The URLs shown in the list of search results usually give you some idea.
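For readers who like to automate the copy-and-paste step, the preparation of “meaningful” fragments can be sketched in a few lines of code. This is purely my own illustration, not part of Balabolka or any other product: it splits a pasted passage into sentence-sized chunks, one per line, which most voice-reading programs can then read aloud or convert to mp3.

```python
import re

# A minimal sketch (an illustration, not any product's feature) of preparing
# "meaningful" fragments for a voice-reading program: split a pasted passage
# into sentence-sized chunks, one per line.

def fragment_text(text: str) -> list[str]:
    """Split text into sentence-sized fragments at ., ! and ? boundaries."""
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

passage = "Reading matters. Without difficult texts there is no mastery! Do you agree?"
for fragment in fragment_text(passage):
    print(fragment)  # one fragment per line, ready to paste into a reader
```

Whether a sentence is a “meaningful” unit still depends, as noted above, on your general knowledge and language ability; the script only does the mechanical part.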

Use the advanced search function, because you will ideally need the domain box when you want results only from native-speaker domains, the major ones being uk, ie, nz, au, ca and us. Yahoo offers the option of searching more than one domain at a time. Using domains will save you a lot of work going through documents written by non-native speakers but published on native-speaker domains. If you prefer documents that have at least been edited by native speakers of English, you will need to open the search results, especially on edu and uni domains, where foreign students publish their documents. As to search technique, always use the “the exact phrase” box (Yahoo) or the “the exact wording or phrase” box.

Shifting or switching round words, omitting or adding words, or using the wildcard may help you find what you are looking for. If this does not help, try changing the domains, or search the entire internet by leaving the domain box blank. You will then get domains like net, com, etc., which do not, however, show you whether they belong to a native speaker; in that case, you would need to open the search results you find interesting to find out whether the document is on a native-speaker domain. Should you ever need to defend yourself against the accusation that you are biased against non-native-speaker English (Local Englishes), you may wish to use my stock reply: “Without best input, poor output”.
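The domain-restriction technique described above can also be expressed programmatically. The helper below is hypothetical – my own sketch, tied to no particular search engine – and simply builds an exact-phrase query limited to the native-speaker top-level domains listed earlier, using the widely supported `site:` operator.

```python
# Hypothetical helper illustrating the search technique described above:
# wrap a phrase in quotes (the "exact phrase" search) and restrict results
# to native-speaker top-level domains via the common "site:" operator.

NATIVE_SPEAKER_TLDS = ["uk", "ie", "nz", "au", "ca", "us"]

def exact_phrase_query(phrase: str, tlds=NATIVE_SPEAKER_TLDS) -> str:
    """Build an exact-phrase query restricted to the given domains."""
    sites = " OR ".join(f"site:{tld}" for tld in tlds)
    return f'"{phrase}" ({sites})'

print(exact_phrase_query("in dire need of guidance"))
```

Pasting the resulting string into a search box saves the manual work of filling in the domain box for each domain in turn; support for `OR`-chained `site:` terms varies by engine, so treat this as a starting point.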

Reading is not the fashionable thing to do, but without reading texts that are not “easy” there will be no mastery of any foreign language. Avoid easy readers and magazines written in simplified or Germanised English.

Do not be afraid of specialist vocabulary. After systematically going through the first 100 pages of any given specialist book, you will have covered a lot of ground and then work will progress much faster.

About this posting
This posting is the last of a series dedicated to topics dealing with various aspects of the English language which usually get short shrift on the internet and in other publications. It is, in a wider sense, concerned with the English language crumbling into incomprehensibility at alarming speed and how society is influenced by it. How do schools and universities react and in what way is literature affected by all this? Furthermore, how do people working in education and linguistics cope with this avalanche of “Local English neologisms”?
What often sounds like modern Pidgin English can generally be put down to neo-pidginicity. It is an artificially accelerated and manipulated process – or rather linguistic genetic engineering – of attempting to oversimplify Standard English, the result of which is in all cases some sort of Neo Pidgin English or Simplified-Simple-Speak. Four major fields of contact contribute to the gradual encroachment on Standard English: Basic Global English, as advocated by Dr. Joachim Grzega, machine translations of any kind, unedited documents and publications – frequently of international validity – being passed off as standard English but in fact written by non-native speakers of English, the acceptance of “Local English” and non-native speakers of English teaching their version of “Local English”. The result of the English “produced” in all these areas of contact is often, at best, a barely elevated Pidgin English.
And to compound matters, Globish appears to be becoming a composite, haphazard mixture of all 180 or so Local Englishes, and may for that very reason not be as easy as some people think once it has evolved into a sub-language of Standard English.

Available now:

A Dictionary of GERMAN ENGLISH or LOCAL ENGLISHES, German version

https://sanchopansa.wordpress.com/a-dictionary-of-local-english-german-version/

Why native speakers of English are in dire need of guidance through the Local English (German Version) of their mother tongue

Basically Debased? Language Simplification in Action

How do Basic Global English, Globish, machine translations and other contributory factors to neo-pidginicity compare?

Basic Global English – A runty, genetically modified language?
When describing his Basic Global English, Herr Dr. Joachim Grzega sweepingly claims “that English words and phrases do and must differ from Standard English when English is used in intercultural situations.” Arguing from his non-native-speaker position, Herr Grzega thinks that “we need a new concept of English as a foreign language. Several analyses of non-native/non-native discourse have shown that non-native forms are actually sometimes quite intelligible and do not impede communicative success, while other non-native forms may cause communicative breakdown.” Regrettably, however, Herr Grzega fails to look into other causes of said communicative breakdowns, such as lax and low standards and benign teaching methods pandering to learners who may be unwilling to improve their communicative skills.

Talking about his Basic Global English in action, Herr Grzega says: “In fact, there were only problems when a native speaker was present, as their nuances, metaphors, humorous asides and double entendres confused the non-native speakers.” Although it can be seen that these are not my words, I hasten to add that I do not support any notion of discrimination or even apartheid – English native speakers should not be excluded from discussions held by any group in whatever sort of English. But this is not all. Because “metaphorical expressions are often problematic, speakers, including native speakers, are advised to abstain from using them”, asserts Herr Grzega. Furthermore, he wonders how helpful expressions are “that cannot be interpreted word-for-word in lingua-franca communication.” Instead, apart from the considerations just mentioned, his advice to native speakers of English is: “use standard speech or general colloquial speech. Speak slowly and distinctly. Your sentences should not be too complex. You may support your utterance with body language… […] but without switching into foreigner talk”. All the limitations described here also apply to Globish-Speak, Basic Global English’s older, yet stunted, business-speak brother.

Last but not least: “Don’t make unexplained utterances that require insider knowledge”. Now then, if English native speakers should wish to acquire these apparently esoteric communication skills, such as body language [I think Herr Grzega means gesticulating, pantomime and grimaces], as a prerequisite to successful communication in Basic Global English and Globish, this would be no small feat.

When one compares the level of his Basic Global English with the quality of translations made by translation software, one finds a common characteristic: texts suitable for 10- to 12-year-olds, which is about the level of Basic Global English and Globish speakers, are easier for machines to translate. Does this mean that when we talk in Basic Global English or Globish we use bot-like, factual and neutral words, or catch-alls which have been emptied of dictionary meaning so that they might fit any experience the speaker would not take the trouble to define? Yes and no. Herr Grzega suggests that we also use body language to make up for the paucity of our Basic Global English diction. There is a lot of educational material available online – millions of pictures with a great variety of body signals and telling grimaces. And why not use my “Smiley-Speak”, which I was facetious enough to suggest in a recent blog post as a replacement for Basic Global English and Globish? https://sanchopansa.wordpress.com/2009/07/22/will-smiley-speak-soon-be-all-the-rage/

Machine Translations
Being favourably biased towards AI in speech and translation programs, I will, for the time being, not delve too deeply into machine translation software and will confine myself to summarizing the most salient points:

• Machine translations are still in their infancy and just like children, they deserve our indulgence. They are bound to become better and better over time.

• There are different approaches to the machine processing of written human language. I favour Google’s method because – be tolerant of my oversimplification – it tries to imitate what goes on in a human mind by using as many human translations as possible as sample translations for its database, together with other methods. In the long run, this combination of methods is likely to “yield” translations which are indistinguishable from human translations. By that time, they will also be able to transfer connotational meaning, which is, according to its propagator, a major deficiency of Basic Global English Speak.

• As to the future development of the quality of machine translations, AI experts think that low-level, algorithmic mathematical languages will be succeeded by high-level modular, intelligent languages which, in turn, will finally be replaced by heuristic machines. They would be capable of learning by mistakes, through trial and error, of operating like human beings. Moreover, I think that together with Google’s approach, this will make machine translations second to none over time.

Way back in 1983, I had the opportunity to test a translation software program at the Hanover Fair, the world’s biggest industrial fair. The hype machine was in full swing, and the software company prided itself on having had George Orwell’s “1984” translated by one of the first commercial translation programs. The samples distributed were impressive; however, they did not show the corrections made by human translators during the editing process. A proud and patronizing assistant asked me for a sentence I wanted translated.
“AI steckt noch in den Kinderschuhen”, said I rather self-confidently. “AI is still in its infancy” would have been the correct translation. This is what I got: “AI is still in its child’s shoes”.
Having a probing mind, I was curious to find out how AI translation software has progressed in the past 27 years. I chose one of the many translation programs at random and had it translate that very same sentence again. This is what it came up with in 2009: “Ai still is in the child’s shoes.”

Microsoft’s grammar correction feature included in MS Word:
A contribution to changing natural speech patterns?

Generally speaking, any tools which may help to make life easier are a welcome relief from tedious work. One of these functions is the Microsoft Grammar Correction feature, which can be activated in addition to the spell checker (setting-option: Grammar & Style). I use this program because of my interest in AI or artificial intelligence, although it can sometimes be fun, too. In “Are you cross with me?” the program insists on “Are you crossing with me?” I am not sure if this is a machine’s way of asking me, “Are you on my side when it comes to crossing the difficult bridge from human to machine translations?” Well, that would be far too early to say but I am prepared to watch the progress of AI software with the open, yet critical mind of a discriminating end-user.

In order to make suggestions and, likewise, to translate text into another language, a program needs to “understand” and “interpret” human language. All too often, appropriate and established usage of the English language seems, when put into programming rules, difficult and therefore hard for machines to “understand”. Naturally, the question arises whether it is legitimate to simplify any given language to accommodate the needs of hitherto imperfect interpreting and translation software.

Some suggestions made by this apparently smart grammar correction tool, however, are in direct conflict with long-established, naturally evolved language structures or patterns (rules). The novel rules suggested often amount to a simplification of the language, and some of the suggested edits are rather striking in that they may change the very nature of the English language over time. The most unnatural rules are those concerning defining and non-defining relative clauses. These are difficult for most foreign-language students to master – and, in all probability, for translation software too. Whenever you write a sentence in which “which” introduces a defining relative clause, this message pops up:

“That or Which”
If the marked group of words is essential to the meaning of your sentence, use “that” to introduce the group of words. Do not use a comma.
If these words are not essential to the meaning, use “which” and separate the words with a comma.
• Instead of: Did you learn the dance, that is from Guatemala?
• Consider: Did you learn the dance, which is from Guatemala?
• Or consider: Did you learn the dance that is from Guatemala?

• Instead of: We want to buy the photo which Harry took.
• Consider: We want to buy the photo, which Harry took.
• Or consider: We want to buy the photo that Harry took.

Both clauses, “We want to buy the photo which Harry took” and “We want to buy the photo that Harry took”, imply that there are other photos for sale, taken by people other than Harry. The two sentences are perfectly correct and limit our choice, while the recommended “We want to buy the photo, which Harry took” means that there is only Harry’s photo for sale; its essential meaning is not changed when we omit the non-defining relative clause.
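The blunt, mechanical character of such a rule is easy to demonstrate. The sketch below is my own deliberately naive illustration – not Microsoft’s actual implementation – of the kind of pattern a checker might apply: flag every “which” not preceded by a comma. Exactly as argued above, such a pattern cannot tell a deliberate defining clause from a merely missing comma.

```python
import re

# A deliberately naive sketch (not Microsoft's actual implementation) of the
# kind of mechanical rule described above: flag any "which" that is not
# preceded by a comma, the pattern a checker might use to suggest "that".

WHICH_RULE = re.compile(r'(?<!,)\s+which\b')

def flag_defining_which(sentence: str) -> bool:
    """Return True if 'which' appears without a preceding comma, i.e. what
    the checker would mark as a defining relative clause."""
    return WHICH_RULE.search(sentence) is not None

print(flag_defining_which("We want to buy the photo which Harry took."))   # True
print(flag_defining_which("We want to buy the photo, which Harry took."))  # False
```

The first sentence, though perfectly correct English, is flagged; the second is waved through. No amount of pattern-matching of this kind captures the difference in meaning the two sentences carry.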

There is also a recommendation as to the use of the passive voice. According to MS Word’s Grammar Correction feature, sentences written in the passive voice are to be rewritten into active ones without exception. “Passive Voice (Consider revising)” is the message popping up.

Both defining relative clauses with “which” and the passive voice are intrinsic parts of the English language. Their usage has grown naturally over centuries. It is almost impossible for a “truly advanced” non-native speaker of English to do without these structures, as they are among the most frequently used by English native speakers. Only recently, I read a book on politics written by two Englishmen and published in 2005. I was not surprised to find defining “which” clauses still alive and kicking, whereas in academic papers from the “German Chapter of Local English”, “that” clauses are generally used instead.

As regards the emphasizing “-self” pronouns and reflexive verbs, the MS Word correction program often changes them to ordinary personal pronouns without “-self” or “-selves”, blindly oblivious to the grammatical differences in meaning they carry. Emphasizing “-self/-selves” pronouns are always strongly stressed and are used for the sake of emphasis, generally to point out a contrast, as in:
You yourself (i.e. “you and not anyone else”) told me the story.
compared with:
You told me the story.
If humans uttering a sentence like this think it important to lay emphasis on the doer of an action, why should machines not keep the original idea when “interpreting” and “correcting” language?

For the time being, it is beyond the grasp – the analytic or interpretational power – of machines, be they grammar correction software or translation machines, to distinguish consistently between such shades of meaning, let alone connotational meaning. However, machine translations will become better in time, if not human-like, while Herr Grzega and other non-native speakers of English are making every endeavour to simplify the English language into a mutilated, indistinguishable and incomprehensible mass. The same holds true for Jean-Paul Nerrière’s Globish-Speak; he is the author of the book “Don’t Speak English – Parlez Globish”. In theory, both sorts of genetically modified corruptions of English are not Pidgin Englishes – or so the theory goes. But as is often the case, theory and practice differ substantially in day-to-day interaction among interlocutors. Both Basic Global English and Globish are runty forms of Standard English – two kinds of immature-speak full of words and passages which are frequently hard to understand. Just as in present-day machine translations, connotational meaning cannot be conveyed in Basic Global English and Globish, although these two do have the advantage of integrating body language and grimaces into their semantic structures.

Herr Grzega considers his Basic Global English to be a minimum requirement of linguistic skills for “global peace and global economic growth” – and, if I may ask, global brainpower as well? However, in his noble attempt at promoting global and cross-cultural language competence in an “atmosphere of trust, tolerance, empathy and efficiency so that information can flow without obstacles”, he seems sublimely unaware that it is his runty form of English, his mutilated Basic Global English with a paltry vocabulary of 1,000 words and a grammar reduced to a measly 20 rules, which constitutes this very obstacle.

About this posting

This posting is part of a series dedicated to topics dealing with various aspects of the English language which usually get short shrift on the internet and in other publications. It is, in a wider sense, concerned with the English language crumbling into incomprehensibility at alarming speed and how society is influenced by it. How do schools and universities react and in what way is literature affected by all this? Furthermore, how do people working in education and linguistics cope with this avalanche of “Local English neologisms”?

What often sounds like modern Pidgin English can generally be put down to neo-pidginicity. It is an artificially accelerated and manipulated process – or rather linguistic genetic engineering – of attempting to oversimplify Standard English, the result of which is in all cases some sort of Neo Pidgin English or Simplified-Simple-Speak. Four major fields of contact contribute to the gradual encroachment on Standard English: Basic Global English, as advocated by Dr. Joachim Grzega; machine translations of any kind; unedited documents and publications – frequently of international validity – being passed off as Standard English but in fact written by non-native speakers of English; and the acceptance of “Local English” and non-native speakers of English teaching their version of “Local English”. The English “produced” in all these areas of contact is often, at best, a barely elevated Pidgin English.

And to compound matters, Globish appears to be becoming a haphazard composite of all of the roughly 180 Local Englishes and may for that very reason not be as easy as some people think once it has evolved into a sub-language of Standard English.

All Fun and Games? The Fun Factory in Foreign Language Education

A giant playground for giant kids?

In an age where financial wizards, bankers and business persons are called “players” or even “global players”, top managers of market-leading companies “key players”, and an almost bankrupt company “is suddenly back in the game”, one is inclined to speculate about the origin of these voguish words. The latest coinages are “theatre” to describe the battlefields in Afghanistan, and “decompression time” – just as after a pleasurable dive in some exotic place – to describe the short time soldiers spend chilling out between a season (in keeping with the idea of leisure time) in a “theatre” and their return to their home countries.

What may be the causes of these ever-present and verifiable symptoms? Playing computer games indiscriminately may be one. Excessive game playing in language education and, often as a result of this, a lack of seriousness may be another. But in what way may other educational tools such as computer software, which all too often appears to be still in its beta stage, with error messages popping up most of the time, contribute to fostering a rather lax attitude? In what way does this affect the pliable minds of the young when they grow up with imperfect hardware and software? Do these mistakes, errors, flaws, faults or whatever we may choose to call them take on a different meaning, and do we come to regard them as natural, unavoidable occurrences? And how does an all too easy-going attitude generally impair our ability to predict, analyse and pre-empt problems? How do games in language teaching mould the characters of learners or students? And how does a generation that has grown up with computer games and plenty of “gaming experience” in and outside the classroom fare when it enters the job market?

Developmental and educational games in foreign language education
How effective are they? That obviously depends on the sort of questions one is prepared to ask. My criteria have not changed over the years:
How much time is spent on playing games? What do I get out of them in terms of quantity and quality? How many contextual phrases, other meaningful contextual fragments, synonymous expressions and the like have become part of my active vocabulary? And, if no hand-outs are provided, have I had a chance to copy down those words, phrases, fragments or even entire interesting sentences for future reference, to work with at home in my own time instead of relying solely on the elusive spoken word in class?

One of the most useless games in language learning I have ever taken part in took place about 25 years ago. What was the point of cutting up a newspaper article, distributing the clippings to the students, making them read out the snippets and having them put the article back into its original sequence? Not one single new word was discussed and not one single definition was given for this difficult and otherwise suitable article from the London Times. And what bearing has this sort of exercise on the acquisition of a foreign language, or on what goes on in our minds when we want to increase our vocabulary? I would expect to find this sort of game in an assessment centre, testing participants for characteristics like leadership qualities or their ability to fall into line in a hierarchical set-up, but not in a language class. Incidentally, none of the participants complained about this novel idea of doing vocabulary work, and I am not sure how many were aware of its pointlessness and preferred to suffer in silence.

Another highlight was when a native speaker of English handed out about 15 idioms, in this case pertaining to one group – duly cut up – and asked the class to match the definitions with the idioms. No hand-outs were given to us and I had to hurry to copy down the three idioms I did not know. What a waste of 45 precious minutes! I almost forgot to mention that students were supposed to discuss their viewpoints among themselves, with the “supervisor”, or rather holiday camp animator, exercising utmost restraint all the time. I was under the impression the supervisor was having a good time abiding by the rules of a theoretical model which, together with peer editing, group discussions and any other forms of error swapping, fosters a kind of “local linguistic inbreeding” and deludes learners into thinking that they are learning Standard English.

In order to give students an opportunity to while away the time in the classroom, textbook publishers changed the format of many textbooks and made them unnecessarily larger, catering for a need learners did not know they had: full-colour editions with lots of empty space to scribble on. The latter can be fun too, especially when you are the ambitious type and design your own Rorschach tests. I guess that the print normally used in, for instance, pocket-sized textbooks would make such learners balk at reading altogether; in other words, it would remind them of “serious work or study”, which seems not to be the fashionable thing to do and, above all, holds no promise of “fun”. It is no surprise to see these books with cartoons in adult education, and I wonder how many trees could have been saved in the past 30 years.

One result of “creative” games may be that games help to make anything which is uttered “ingrained”. Yet little control of “quality input” is exercised, due to the nature of games. Ever more games are invented, as if the novelty factor were the decisive criterion. One sometimes gets the idea that a never-ending competition in inventing new games is going on among educators, while the stock of really useful games has long been exhausted. A native-speaker friend of mine who had worked as a teacher of English in Hannover for several years told me that weekend seminars for teachers were held on the North Sea coast for the sole purpose of learning new games for use in the classroom. It must have been great fun for the participants, adult-sized kids as it were. One has to concede, however, that useful games may have their place in pre-school education.

Only recently, an acquaintance of mine who has no formal teacher training told me that he had volunteered to host a discussion group for migrants. The fun factor was important, he had been given to understand. And the most important thing was just to make the participants talk without talking too much himself, he told me with a smile of resignation. He had been admonished not to interfere in the free flow of ideas exchanged among the participants, only to find that his charges conversed in a mutilated, difficult, hard-to-follow and often incomprehensible pidgin sort of German. As a result, he sat there all the while, reluctantly nodding in agreement at the gibberish emitted from eager, yet incompetent mouths. They did not know otherwise.

No wonder that he threw in the towel out of frustration after about four weeks. It was simply beyond him why it was perfectly acceptable to subject learners to bad language, to bad model sentences, to bad snatches of speech, to bad pronunciation, to bad collocation, to very bad grammar and to an extremely poor vocabulary and style. In fact, so bad that no parents would subject their children to it if their native tongue were concerned. And he concluded that, up to a certain level, you would probably find this in almost any classroom you might care to visit. I hasten to add that “bad” is used here, of course, in the sense of “a strain on the interlocutors, hard to follow, difficult or even impossible to understand”.

As we have seen, playing games and other modern methods can be fun for the learner and a source of great hilarity to the critical observer. It would be remiss of me not to mention one incident when native-speaker textbook authors wanted to have some fun too. On a worksheet containing idioms and colloquial expressions to be imparted to eager students wanting to learn idiomatic or natural English, it said with great pedagogic conviction: “You may sound odd if you use them”. Printed by the publisher, mind you, not a hand-written note by some disillusioned teacher. Not a word of criticism was heard at such balderdash. I, however, presumed to disagree, suggesting that it was not a very encouraging remark to put on worksheets to be distributed to students of English, especially since the copy was taken from a textbook published in England.

You would expect this sort of comment in support material for Basic Global English, which is, according to its inventor Dr. Joachim Grzega, not suitable for communication with native speakers of English. Generally, learners think that UK and US English is taught throughout Germany, and many pupils and students would be very disappointed if “Basic Global English” and its somewhat older relation “Globish” were introduced on the sly through the back door.

“Use your own words”, said in a minatory voice, as if it were an offence to use newly acquired vocabulary, is another rule straight from “The Book”. Using one's own words must be more fun, I concluded, because of the implicit “seriousness” (equals absence of fun) inherent in building up a large vocabulary. By implication, this rather arrogant instruction means: do not take the trouble to employ those words you might have just learned, had I not prevented it; do not enlarge your vocabulary; do not increase your power of thinking. It is common knowledge that every single word is a tool to do your thinking with; the more tools you have at your disposal, the more powerful your thinking will become. Conversely, reducing and limiting one's vocabulary would be a retrograde evolutionary step.

The following example is about a foreigner who made other people's words his own and who did seem to get a certain degree of fun out of it. When I was about 14 years old, I met one of the so-called “guest workers”. He was Italian and must have been about 40 years old. Apart from his open-minded relations with Germans, which were very unusual at the time, I was struck by his excellent German. He spoke with great precision and had a large vocabulary and impeccable grammar (hold your horses, I know what you are thinking) – that is, qualities contributing to clarity. In the course of our talk, he pulled a notepad and pen out of his pocket and asked me about the meaning and spelling of a word I had just used. He then wrote the word in his notepad with great precision and care. Oddly enough, it did seem like “fun” to him, and I asked him what else he did to improve his excellent German. “It's great fun listening to the radio. I like reading newspapers as well, not the tabloids, though”, he told me with great conviction.

As to the taboo word grammar, I once met a German who was a very fluent, fast talker with a large vocabulary. All the while he was churning out his words, he seemed to have great fun. Not so those interlocutors of his who took an interest in what he was saying and did not just nod in the right places without understanding much. My complaint may not be politically correct, but listening to him was a terrible strain because he made so many grammatical mistakes that they were actually an obstacle to comprehending what he was trying to say. According to the doctrines of modern pedagogy, he must have been a one-off, because “The Book” says that with time and practice, mistakes will disappear. With him, they had become ingrained – a fact that is frequently overlooked. Now I dare ask a bold question: if you say something grammatically wrong over and over again, how can it ever become right?

To most questions posed at the outset of this post, I can offer no answers. And those I do offer, tentative as they may be, probably fall short of general approval. The moderate use of games in the classroom can be useful, especially as a break from long hours of learning. However, in most cases games are time-consuming and yield few measurable results. As to the problem of how a game-playing “culture” may affect society on a wider scale in terms of its brainpower and economic performance, ex-chancellor Kohl put the dilemma very succinctly about fifteen years ago:
“Germany is a huge amusement park”.
One is inclined to add now: operated by professional teenagers.


Finally, it would be interesting to see the first book written in Basic Global English, Dr. Joachim Grzega's novel and daring invention, and to see in which section bookshops will display such a work of art.

The Taboo-Side of Group Work in Foreign Language Teaching

Group work at universities: a peremptory demand by industry and commerce

In 1982, when I was a guest reader at Hannover University, I was struck by the so-called “group work”. The way it was done did not make any sense to me. The answer to this riddle I got straight from the horse's mouth when I asked a native-speaker lecturer about it. She had been a witness to those times, saying that industry and commerce had asked all universities to implement group work in order to better prepare students for their future jobs. The curricula in all disciplines were obligingly changed in a rush, and no one has ever bothered to look at the way group work is actually done.

Putting the cat among the pigeons, I will start my discussion with a rather contradictory sentence I found a few years ago in the classified-ad section of a broadsheet newspaper:

“Employees of the sales department have the opportunity to prove themselves on their own in a highly competitive market while aware that success can only be achieved as a team.”

I must confess I am guilty of palliating the rather harsh-sounding German original, which is:
“As an employee of … you are an individual fighter in a highly competitive market…”

This statement reflects the whole fundamental dilemma of “group work” as it has been practised in preparation for occupational and professional careers in schools and universities all over the world for more than 30 years. However, in the mid-seventies – shortly after the implementation of group work in schools and universities – a devastating study by two scientists was published in the “Harvard Business Review”, reporting that in the business world, or in management from supervisory level upwards, hardly any work was done in groups in the way it was practised in schools and universities.

The most striking difference between theory and practice is that at schools and universities, groups are peer groups; that is, the members forming a group are all of the same rank, trying to solve problems in a “democratic” way. Interminable discussions often meander around incidental issues. In foreign language classes, error swapping becomes the most notable of all activities and, together with peer editing, group discussions and game playing, contributes to a phenomenon I have dubbed “local linguistic inbreeding”. In task-oriented group work, none of the group members has the formal authority to assign tasks, follow up on them, discipline laggards, and evaluate the individual member's contribution to the project work. The role of the professor or teacher is in most cases reduced to handing out the task to the group and making sure that only the target language is spoken throughout, with some teachers pursuing the latter with a kind of pathetic rigour. The group will then divide the task and sort out the details among themselves without further follow-ups or close supervision by the professor or teacher, giving spongers ample opportunity to reap the benefit of the group's achievement without ever doing a single stroke of work at home. Conversely, in class, some basic grasp of core issues and key phrases enables them to pass themselves off as diligent and expert students of the subject without the professor ever noticing.

Group work is often used in assessment centres to gauge participants' leadership potential or their ability to get along with people, and there it has its due place. However, in the day-to-day operation of companies, subordination rather than creative contribution is what is more likely to be expected from individual group members or staff.

Apart from this, in the real world there is always a manager or department head in charge whenever he or she calls a meeting for any purpose. This superior has the formal authority to give direction, if not plain orders. Subordinate group members, to use this term for the sake of comparison, have a predefined area of work for which they have been hired and for which they are solely responsible. Tasks are assigned through unconcealed instructions from the highest-ranking group member, not by discussing endlessly who is going to do what and how it should be done. Every single subordinate group member has to rely on him- or herself when it comes to accomplishing a particular task, with the exception of the odd non-committal peer consultation.

Following up on assigned tasks, controlling, and the final evaluation of each group member's contribution are done by the manager by virtue of the formal authority he has been invested with. He ensures that only the individual is finally held responsible for the quality of any work assigned to him or her, rather than punishing the group summarily for the poor performance of one or two underachievers. Conversely, high achievers do not need to share their success with those who do not merit it.

I think it was rather the communication skills managerial and supervisory staff need in meetings which were mistakenly dubbed “group work” way back in the seventies. To me it is not surprising that no one ever seems to have questioned “procedures”. Maybe it has to do with the “authority syndrome”, and I cannot help thinking of Dr. Fox, the actor who had been hired to deliver speeches which did not make any sense, without anyone noticing.

Today, you find group work across the board in both schools and universities, in all disciplines – not surprisingly, in foreign languages as well. Group work is the mainstay of the Communicative Teaching Method, or CTM, and plays an important part in Dr. Joachim Grzega's artificial and mutilated invention, namely his Basic Global English method. For the past three decades, almost everything has been done in groups, be it discussions, the joint writing of texts of any kind or communal text appreciation, cooperative poetry analysis or collective grammar and vocabulary exercises. What the heck have group work and the ensuing discussions with non-native speakers of English got to do with the acquisition of a foreign language when there are much better role models and methods around?

There are other downsides to group work as well. For instance, when doing work in a group that requires your utmost attention, you may get distracted by frequent and often superfluous interruptions or overbearing interference. Or you feel obliged to consult the other members of the group, asking a question or soliciting opinions just because this is what is expected of you in group work. My observations made over many years seem to confirm these assumptions.

To illustrate my arguments, I can give a few examples from the vast storehouse of my experience. The daughter of an acquaintance of mine had been a straight-A student for four consecutive terms. When she could no longer put off attending university courses which required her to do group work, she was alarmed, fearing that her record might be tarnished by a mediocre “B” grade the group might receive for the plain fact that spongers and less interested or merely average students might bring down her impressive performance. Fortunately, it did not turn out that way in this instance, but the following example is a convincing case in which a teacher was unable to judge the individual group members' performance because most preparatory work had to be done at home.

I remember a case when a professor made some deprecating remarks about one of our group members. She wrongly assumed that the student in question had hardly done any work, while in fact the student was the key player: she had prepared all the nitty-gritty research, summarized her findings expertly and handed them to two spongers on a platter. If I had not had the chance to point out that it was in fact that very student who had contributed most to the group's success, her final evaluation would most likely have been downgraded because of this mistaken perception on the part of the teacher. Go-to guys sometimes remain behind the scenes out of modesty, as in this case, which can lead to devastating assessments by professors and teachers. Oral activity in the classroom is not necessarily an indicator of a group member's true contribution to the work of the entire group.

Another example is the case of a group of about ten people which had been formed impromptu during a literature class at university. One student was to give a short lecture on the outcome of our discussion of a set of questions – a sort of “mission impossible”, though no one realized it. The student in question was to take minutes of the individual contributions to the group discussion while guiding the group through the set of questions. Then she was to analyse her minutes on her own for a couple of minutes before presenting the consensus ideas to the whole class. No one realized that she presented her own ideas, prepared in some detail at home against said set of questions.

During the group discussion, she would steamroller over all ideas which deviated from those she had prepared. After a couple of minutes of refreshing her memory, she read her notes off three closely written pages – notes she had prepared entirely at home. None of the arguments put forward during the group discussion by participants – some of which differed widely from hers – was mentioned in her oral summary to the entire class. Since ideas on literature tend to be highly subjective, no one actually realized that the excellent lecture she delivered was solely her work, her very own analysis. If someone else noticed what she had actually done, he or she kept quiet, with the professor beaming because she had gone by “The Book” and “made the group talk”. You cannot blame the student who delivered the lecture, resolute as it was. If your marks or evaluation depended on this “sort” of group work, would you have done otherwise?

Peer editing is another common group activity, although in this case it is done on a smaller scale, namely in pairs. There is no denying that your non-native editor may spot the odd awkward passage or the odd mistake. But in most cases, he or she will not be very helpful in remedying fundamental shortcomings. Rather, error swapping among the non-native participants is more common. If the students are lucky, a native-speaker teacher checks and prepares the text so that it can be saved on their “brain hard-disk drive”.

Generally speaking, there is a dangerous implication in the failure to recommend native-speaker editing: language students are made to believe that their English is native-speaker-like UK or US English and does not need editing.

To revert to the two distinguished scientists who had established that group work did not actually happen in companies in the way it was done in the classroom: their finding did not surprise me at all. It coincided with my experience that the only time two or more heads of department ever worked together on a joint project on an equal footing was when it came to organising the annual Christmas binge.

Come to think of it: what might Sigmund Freud have said about educators' preoccupation with “group work”? I would not be surprised if he had diagnosed “group work” as a sort of sublimation, the sort you would think he might be interested in.

There is no trick to being a satirist when you have so many people working for you.


Fastererer, bettererer, or wrongererer?

Advertising Copywriters and other Role Models in Language Teaching and Training

Beware: Satire

The other day, I noticed a new advertising campaign by a German airline. Linguistically genetically engineered, or rather maimed, adjectives like the ones in the blog title were paraded on posters strategically placed in Hanover's underground stations and along major roads. Each poster featured a different bastardized, distorted and degenerate German adjective positioned on top of a photograph of a smiling air hostess. I took the liberty of adapting them when I wrote this spoof and needed to come up with their equivalents in English. However, I managed to retain the intended “play” on the last syllables.

On the following day, at about the same time, I noticed a small group of migrants. This is what we call the people belonging to this group, because we do not have immigrants in Germany. They had assembled before one of these posters and were looking reverently up at this novel aberration of our language.

Then, the most astonishing thing happened. One of the girls stepped forward, turned around towards her group and began to recite rhythmically the subject matter of their admiration, clapping her hands as if to underline each syllable. All the time, she moved about like a cheerleader, while enunciating each syllable of the adjective of their choice with great precision and no small degree of enthusiasm. WRONG-ER-ER-ER, WRONG-ER-ER-ER, WRONG-ER-ER-ER, the group chimed in, chanting ecstatically with enraptured eyes.

The entire situation somehow reminded me of elocution classes pegged down a few notches. And at the same time, I was thinking of Dr. Joachim Grzega`s novel and daring concept of Basic Global English. Mr Grzega is a German linguist with a mission in that he has, like his copywriting advertising colleagues, maimed and mutilated a language, though in his case it is Standard English – a language which is not his native tongue. His novel and courageous invention is a Simplified-Simple-Speak version of the “Local English, German Version or German Chapter”.

What is most admirable is the fact that Dr. Grzega has boldly reduced Standard English grammar to 20 rules. He did not even bother to ask native speakers of English – the rightful owners of Standard English – whether a non-native speaker should be allowed to tamper with one of the most subtle languages of Europe, a tried and tested code of communication. His Basic Global English consists of a vocabulary of some 750 words plus a bespoke bonus vocabulary of 250 words, tailored to the individual needs of his pupils. Dr. Grzega reckons that this Simplified-Simple-Speak vocabulary is good enough to explain our complex present-day world with – and perhaps the teeny-weeny individual worlds of his learners, with the help of the tailor-made 250-word bonus vocabulary. The only good news is that the adjectives used in the caption of this spoof are on his pitiful list of 750 words which “BGE speakers” should know. Consequently, learners of Basic Global English and Basic Global Business English will at least be able to read the caption of this blog.

To revert to my small group of enthusiasts: I could not find fault with all this. They were just acting true to fashion. To them, it was good or correct German, worthy of becoming part of their active vocabulary. After six months of intensive hypnopaedia – I am being awfully sarcastic here – in the free language course which migrants of German descent receive upon arrival in Germany, and after passing a language test prior to being given a visa, many of these migrants can barely bid you good morning in German upon completion of said course. A Polish colleague told me ten years ago that at most language schools, learners are advised not to study the language of their new home country but to pick it up “naturally”. This also holds true for present-day language classes for migrants. If they come across a new word they do not know, they are advised not to consult dictionaries; with time, they are told, they will know it. Many migrants I have asked since have confirmed this. By implication it means: don’t take the trouble to learn new words; do not enlarge your vocabulary; do not increase your power of thinking.

You are also likely to encounter the all too complacent advice from teachers, “Do not worry if you don’t understand a word” – the implication being to take it easy, not to bother to look up words in a dictionary, not to study how words and phrases operate in their contexts. Try telling that to your young children when they ask about the meaning of a word as they begin to learn their native tongue. No one would ever dream of doing so. Why is it done when it comes to teaching a foreign language? Has nobody ever realized how incongruous, indeed absurd, this instruction is?

In foreign language teaching you may even come across native speakers of English giving learners expert advice – for instance, not to bother looking up words but to pick up the language “naturally” – without ever having learnt a foreign language themselves. Likewise, they may never bother to teach words and phrases in context. “Teaching English in contexts is considered time-consuming in pedagogy”, a young teacher-student said in a lecture only two years ago. That is why you still find learners learning single words by heart, perhaps with one sentence to illustrate their meaning. It must be the same method Neolithic men used to build up their vocabulary: looking at cave paintings and learning the animals’ names by rote. But one needs more sentences, or parts of them, to anchor words in the memory, become more fluent and acquire a good grasp of any given language.

As a result, we have migrants who have been living here “abroad” for a decade or two and still do not know any German at all. Others have a working vocabulary of about 700 words, and talking to them is a strain on anyone who takes a genuine interest in what is being said. If you do not, and merely nod along out of politeness, you are behaving in a “politically correct” manner, but without the slightest genuine exchange of ideas. Is it any wonder that migrants are not fully integrated into our society?

You may even find many migrants who have lived here for 10 or even 20 years and can barely speak the language but are, nevertheless, in a state of blissful incompetence, believing that they do speak it. Despite the fact that many of them have German colleagues at work and German friends, they have not improved their German one little bit. Watching TV regularly, but passively, seems to be no solution for most of them either. I think it is high time the fairy tale that one learns a language best by living in the country were put under close scrutiny. The myth would probably end up in tatters: living in a foreign country without getting involved in the language in one way or another does not guarantee success. People who can pick up a language by ear alone, without even a native-speaker spouse or friend to consult on grammar and the meaning of words, are as rare as chess grandmasters – and the latter, in addition to being extremely talented, have to work very hard and train for hours on end every day.

Basically, living in the country of one’s choice, together with a multitude of learning aids, provides ample opportunities to learn a language. It would be interesting to find out why most people, migrants and learners of a foreign language alike, manage to communicate only at the “threshold level”. It is small wonder that too many migrants have difficulty in entering the “house” or, in other words, become integrated in society. Very little is known about how “very good” students learn a foreign language. Some of those who managed to acquire good language skills, be it when they lived abroad or in their country of origin, do not talk about it, but from my experience and modest “research”, I can say that most truly advanced students did, in one way or the other, work for it.

Almost 25 years ago there was a passage in a brochure issued by either the Ministry of Education or the Ministry of Science and Technology – I cannot recall exactly which – about prize-winning pupils in the annual foreign languages competition. It innocently stated that nothing at all was known about how those pupils learned their foreign languages; only one thing was certain: not in school. Looking at the latest PISA results, we may deduce that for some reason or other this state of “ignorance” has been perpetuated.

Way back in 1982, I took part in a conversation in which a friend of a friend claimed he knew someone at a Regional Government Office whose job it was to convert A-Level results to those of 1953, and that a straight “A” achieved in 1982 was a humble “C” in 1953. Not surprisingly, standards have deteriorated further in the intervening 27 years, and it would be interesting to know what an “A” achieved in 2009 is really worth these days. May one not infer that the poor PISA results German pupils scored came as no surprise to the authorities, and ask why corrective action was not taken in time? In my opinion, it was easier to acquiesce in the situation brought about by the unconstrained application of the doctrines of a liberal and permissive society than to try to apply higher standards against the opposition of minority groups.

To answer the question in the caption of this satire – whether admen, or, more politically correctly, adpersons, serve as role models in language learning – the answer is a resounding no. However, at least one of our nation’s best creative minds – so they say – must feel very proud now. Allegedly, the best psychologists work in PR, marketing and advertising. Had I not known, I would never have guessed. Perhaps the perpetrator of this linguistic crime will be nominated for the next “Advertising Oscar”, which ad people award each other at their annual binge.

There is no trick to being a satirist when you have so many people working for you.

About this posting

This posting is part of a series dedicated to topics dealing with various aspects of the English language which usually get short shrift on the internet and in other publications. It is, in a wider sense, concerned with the English language crumbling into incomprehensibility at alarming speed and how society is influenced by it. How do schools and universities react and in what way is literature affected by all this? Furthermore, how do people working in education and linguistics cope with this avalanche of “Local English neologisms”?

What often sounds like modern Pidgin English can generally be put down to neo-pidginicity: an artificially accelerated and manipulated process – linguistic genetic engineering, rather – of attempting to oversimplify Standard English, the result of which is in all cases some sort of Neo Pidgin English or Simplified-Simple-Speak. Four major fields of contact contribute to the gradual encroachment on Standard English: Basic Global English, as advocated by Dr. Joachim Grzega; machine translations of any kind; unedited documents and publications – frequently of international validity – passed off as Standard English but in fact written by non-native speakers of English; and the acceptance of “Local English” and of non-native speakers of English teaching their version of “Local English”. The English “produced” in all these areas of contact is often, at best, a barely elevated Pidgin English.

And to compound matters, Globish appears to be becoming a haphazard composite of all the roughly 180 Local Englishes, and may for that very reason not be as easy as some people think once it has evolved into a sub-language of Standard English.

Finally, it would be interesting to see the first book written in Basic Global English, Dr. Joachim Grzega’s novel and daring invention, and to see in which section bookshops will display such a work of art.

Bane or Boon: Social Work in Teaching Foreign Languages

How do benign teaching methods contribute to learning a foreign language?
Reliance on the elusive spoken word; time-consuming games and teaching techniques peculiar to animators; group discussions unchecked for appropriateness, precision and clarity; unbridled disregard of error-swapping in peer-editing and group work. To come up with a scheme to remedy the current sorry state, a rigorous analysis of what is going on at the receiving end of the teaching process – for instance, recording and analysing classroom activities – may be useful in reassessing the unhappy status quo.


How a failure in communication changed my life

With tongue in cheek, I relish telling this little anecdote about the origin of CTM, or the Communicative Teaching Method, which has been dogmatically and uncritically applied in the teaching of foreign languages ever since. I was a contemporary witness when a paradigm change took place and social work was implemented in pedagogy, way back in the seventies. To illustrate my point, this is when I became aware of it.
Scottish Peter, as he was nicknamed, was standing before me, bent over, his hands supporting his massive body on his knees. He was swaying from side to side, alternately directing first his left ear and then his right towards my mouth. All the while, he had a look of utter despair on his face, his eyes fixed on my lips as if this would facilitate his comprehension of what I was trying to say. Alas, it was to no avail.

I was trying to pronounce the word “vegetable”, but I must have got the IPA symbols wrong when I taught myself some of the rudiments of the English language. Not unexpectedly, there were bound to be errors, and in this case my pronunciation of the second syllable sounded like “table”: vege-table. No wonder Scottish Peter, who worked as a breakfast and vegetable cook in our small hotel in Guernsey for the summer season, had a hard time understanding me. Being a social worker by profession, he felt it his duty to blame himself for what he thought was his inability to understand my gibberish, rather than blaming my ignorance and my inability to speak understandable, accurate and clear English. But can you actually blame him? He was probably a victim of the theory of CTM, which has, up to now, not been scientifically tested. He was probably thinking, all the while our little communicative comedy act was going on, that I was an underprivileged victim of society. It never crossed his mind that I might simply have been too lazy to learn proper Standard English! Needless to say, this breakdown in communication was no isolated incident, and I resolved to do something about it to ensure I would always be understood with ease, if that should ever be possible.

At the time, I had little theoretical grounding in phonetics and grammar worth mentioning. My active vocabulary was about 700 words, barely enough to engage in simple-speak small talk. English people are always polite and tried to make me believe that my English was good, which I knew it was not. After my first stay in the UK, I began to work in earnest with authentic material to improve my English in all areas. In short, I began to “study” proper English largely on my own. It was common practice at the time for the native teachers to do most of the talking, which suited me well, since I was very much interested in “authentic English”, and in most cases I absorbed it as first-rate model English, like a sponge.

That was the time when the responsibility for results in language teaching rested solely with the teacher, who was supposed to impart knowledge of the language – expertise on synonyms, near-synonyms and varied structures – giving many contextual examples. And all of this was done skilfully, professionally and competently, with a high degree of enthusiasm, fervour and zeal. (I have it on good authority, straight from the horse’s mouth, that these days teaching contextual English is considered “time-consuming”.) These teachers always handed out copies of the texts, so that one was able to work with them at home in one’s own time and do the all-important revision whenever one wanted to. Some of them had used their teaching material for more than twenty years without detriment to the motivation of their students. With this way of presenting material, I found that the retention rate was high, probably because of the affective element inherent in the technique. Of the fifteen or so teachers of English I have had, about four actually possessed those rare qualities. This high-calibre, talented kind of teacher was appreciated unanimously by all students, even the slow and lazy ones. What most of us valued most was his or her ability to give explanations eloquently and fit for printing; in short, he or she was a master of the language.

Then there was a major change in the teaching of foreign languages. I vividly remember the evening of my first encounter with the “Communicative Teaching Method”, or CTM, as it was called. It was in one of those language classes for immigrants in South Africa. I had attended the course before and was surprised that there were about 80 people in the class that evening, as opposed to some 20 in previous classes. The new teacher divided the class into groups of four with the air of an expert, as if he had had long years of practice in what was to follow. He then went around the class, talking briefly to each group. Ours was the last group he stopped by, and after exchanging a few sentences with each of us he established that we happened to be the most advanced group in the room. Since I was not interested in statistics and in swapping errors with other non-native speakers of English, I stopped going to that class.

Only years later did I find out that CTM had been introduced worldwide without a shred of scientific evidence as to its efficacy. And I have not seen any comparative long-term scientific studies of any given combination of methods!

But this was not my last contact with my pedagogic pet peeve. A university lecturer at Hannover University, who had read German at some American university, had been in the country for a number of years, cohabiting with a German woman for some time. Yet he was unable to speak German properly; it was rather the gibberish sort – despite all the advantages of living with an educated native speaker, which is particularly conducive to acquiring a foreign tongue.

And he did insist on going by the CTM “Book” in his classes. One day he called to tell me that he had been most astonished to find more than ninety students in his university course on Shakespeare, and he was eager and proud to explain that he had gone by “The Book”: he had had the students form groups of four and speak to one another in English. All he did was go round, ensuring that only English was spoken, while making sure he did not miss a single table of four.

I suppose that in the not too distant future this sort of hopping from group to group and “listening in” can be taken over by some language-surveillance computer or robot. The device would hover above the participants – the symbolic meaning of hovering being the authority, or superior knowledge, so badly craved. It would ignore the quality of the English spoken, emit encouraging sounds at irregular intervals, tilt its metal head as a sign of attention and extend a pair of duly pricked-up metal ears; it could even be programmed to make a nodding movement indicating approval. And it would not have to be in the right place, because nobody would notice or care.

Needless to say, it is S.O.P. (Standard Operating Procedure) with CTM not to interfere, not to correct even bad mistakes and, above all, to leave the talking to the group: “active speaking” right from the start. In other words: output without input.

Incidentally, the expression “active speaking” is part of a slogan used by a coaching company in Germany. Ever since I read their ad, I have been wondering what “passive speaking” may be like. “Silence” would be my best guess, also because it is what would be best these days in many cases.

There is no trick to being a satirist if you have so many people working for you.


Will Smiley-Speak soon be all the rage?

A Global Language of Smileys as a Lingua Franca
What will the global language of the future be like? Perhaps, as predicted at http://www.askoxford.com/globalenglish/?view=uk, a simplified Standard English to accommodate the needs of the younger generation, plus Local Englishes in each country? Or even a kind of Simplified-Simple-Speak as propounded by Dr. Joachim Grzega?

Or will sign language, as used by deaf and mute people, replace speech? My guess is that smileys will make it, in both speech and writing. Right now there are more than 5,000 different smileys around, and this won’t be the end of it. The Chinese language is proof of the viability of my novel idea: 3,000 characters suffice to read a mainland newspaper, and well-educated Chinese know about 7,000 characters.

Initially, you will probably have problems pulling faces and sticking out your tongue when communicating with your boss or when you are given an audience by the Pope but once you have got the hang of it, it should become second nature. Teaching Smiley Speak must be fun too.

Smileys are exactly what the doctor ordered to replace the trendy simple speak of the young, and also BGE (Basic Global English). Even Dr. Joachim Grzega’s method of mutilating the English language does not go far enough, and his Basic Global English, or BGE, will probably have problems holding its own. Some newspapers, among them the prestigious Süddeutsche Zeitung, have already begun to run series of “interviews without words”.

It does not seem to be beneath their dignity to publish series of grotesque faces contorted into ridiculousness. Nevertheless, these may be the forerunners of a new global speak, so treat them with the respect and seriousness due to them.

For more hilarious inanity click here:
http://sz-magazin.sueddeutsche.de/texte/anzeigen/26532

