
Summing it up “intelligently” or simply “copying and pasting” sentences?

Summarizing Software versus Human Agent Précis Writing

To what extent do summaries generated in a natural test environment live up to product descriptions, and how do they compare when pitched against summaries written by human agents? And how do the four summarizing products tested compare among themselves? Do different summarizers come up with the same results when fed the same text? So, is it plain low-level algorithm-based “copying and pasting” techniques versus higher-order thinking skills in humans? In an analysis running 100 pages, I set out to find out exactly that.

Throughout the tests, no traces of human-like intelligent capabilities have been found in machine-generated summaries

With regard to “intelligent” properties, summarizers do not live up to the promises made in product descriptions. Commercial computer software (summarizers) cannot produce summaries which are comparable to those produced by human agents. Throughout the tests, and not unexpectedly, no trace of human-like intelligent capabilities has been found in machine-generated summaries. The methods summarizing software uses are plain low-level algorithm-based “copying and pasting” techniques generating summaries in an automaton-like fashion. They are not the results of a mental process; current summarization software is incapable of generalization, condensation or abstraction. Summarizers extract or copy out or filter out original sentences or fragments in the right sequence but content-wise in an unconnected fashion. Summarizing software cannot distinguish the essential from the inessential; it cannot abstract the essence of original texts and condense them into a restatement (onto a higher conceptual plane). Summarizing software lacks the properties of abstract thinking, of analysis and synthesis. It has no insight; it cannot interpret facts and grasp concepts, let alone the wider overtones of any given text, e.g. deliberately used humour, sarcasm and irony, or biased tendencies. It cannot order and group facts and ideas, nor can it compare and contrast them or infer causes. Neither is it capable of condensing text into pithy restatements, nor can it reproduce text as paraphrased abridgments, nor, for the time being, can it recast sentences at even the most elementary level.

Detailed analysis

Direct comparisons of the human brain functions used in précis writing and summarizing software’s algorithms get short shrift in academic papers, at least in those available on this subject. Readers interested in this subject, yet unaccustomed to reading off-the-beaten-track topics, may find this text interesting.

Reporting structures are the most frequent structures used in any language, yet little emphasis is placed on this fact in education and (foreign) language training, as any textbook analysis will reveal. “Summarization is one of the most common acts of language behaviour. If we are asked to say what happened in a meeting, what someone has told us about another person or about an event, what a television programme was about, or what the latest news is from the Middle East, we are being asked or invited to express in condensed form the basic parts of an earlier spoken or written text or discourse.”1 Often, such summaries are shortened or constricted or abstracted onto another level. But very often they use the same verbs, verbal phrases etc. as are used in reported speech.

In this test series, I compared summaries produced or created by human agents – also called abstracts, synopses, or précis – against extracts generated or copied out by various summarizing software or programs (software agents, also called computational agents). All human agent sample summaries have been taken from Cambridge Proficiency Examination practice books (UK English). My point of departure was the hype created by software companies, which all too frequently endow their summarizing software with human qualities, bordering on “personification”. Most of the product descriptions and reviews would have us believe that their computing power is fully comparable with human brain power. We are promised that these programs can determine what a text is about, extract the gist of any text, pinpoint the key concepts, and reduce long texts to their essentials. What is more, we are made to believe that they can analyze a text in natural language, “taking into account its structure and semantic relationships”, and even get an in-depth understanding of the underlying idea.

I have taken up the challenge posed by said overblown statements, which often represent summarizers as “intelligent”, and pursued the question whether the various commercial summarizing programs available are mere number crunchers whose algorithms simply extract or copy out sentences and fragments, or whether they possess some kind of artificial intelligence akin to that of humans. This important difference between abstract thinking in human agents and the automaton-like properties of summarizing software is looked into in some detail and supported by stringent test results confirming the superiority of the human brain over the unthinking, machine-like properties of summarizing software.

Is software “intelligent”? – And are human agents “truly” intelligent?

In academic papers, in product descriptions for commercial summarization software, and generally in the field of AI, the term “intelligent agent” is frequently used in connection with software or software components. The degree to which present-day computer software, and summarization software in particular, is “truly intelligent” is seldom a principal object of investigation, be that for its supposed lack of relevance or for the unavailability of investigative papers readily accessible to the public. Summarizing software, said to be artificial intelligence (AI) software, is claimed to be capable of generating complete summaries (extracts), which are sometimes misleadingly called précis, synopses or abstracts – all terms which, rather, describe human-agent-produced summaries. In this analysis I have addressed the issue of the often ambiguous hype surrounding summarizing software. All too frequently, it is openly or implicitly invested with a human-like intelligence.

By gauging its performance in tests in which summarization software competes directly with human-agent-produced summaries taken from textbooks preparing for the CPE (Cambridge Proficiency Examination), I have explored its computational competence and the current state of the supposed “intellectual” quality of the results generated, or the lack of both. Most present-day commercial summarizers, including those tested in this analysis, use the method based on extraction, with the summarizer copying out (copying and pasting) key sentences from the original text. In contrast, the abstractive summarization method is based on natural language processing, meaning the software needs truly to “understand” and “interpret” the original text and reproduce its most important information in its own “words” in abridged form. Present-day commercially available summarizing software using the abstraction method cannot do this satisfactorily, and if there are any non-commercial summarizers in operation, they are difficult to check on.
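To make the extraction method described above concrete, here is a minimal sketch of how such a “copying and pasting” summarizer typically works. It is an assumption on my part that the tested products use word-frequency scoring – vendors do not publish their algorithms – but frequency-based sentence extraction is the classic textbook approach, and the sketch shows why the output is a mere selection of original sentences, never a restatement:

```python
import re
from collections import Counter

# A tiny illustrative stop-word list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "as"}

def extractive_summary(text, ratio=0.25):
    """Score each sentence by the summed frequency of its content words,
    then copy out the top-scoring sentences in their original order.
    No abstraction or paraphrase takes place at any point."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)
    scored = []
    for i, s in enumerate(sentences):
        tokens = re.findall(r"[a-z']+", s.lower())
        score = sum(freq[t] for t in tokens if t not in STOPWORDS)
        scored.append((score, i, s))
    keep = max(1, round(len(sentences) * ratio))
    # Take the highest-scoring sentences, then restore document order.
    top = sorted(sorted(scored, reverse=True)[:keep], key=lambda t: t[1])
    return " ".join(s for _, _, s in top)
```

Every sentence in the output is a verbatim copy from the input; the program has no representation of meaning whatsoever, which is precisely the point made above.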

Users hardly ever get complete, connected and readable summaries

Product descriptions assure potential buyers that summarizers can determine what a text is about, pinpoint core messages and key concepts and thus reduce long texts to their essentials. One software company even wants to make users believe that its summarizer can analyze text in natural language, “taking into account its structure and semantic relationships”, and even get “an in-depth understanding of the underlying idea”. Furthermore, readers are promised that they can spend considerably less time understanding the general meaning of any document or website by just reading machine-extracted summaries, without missing key information. However, the tests have shown that summarization software’s machine-reading-comprehension properties lack accuracy, since users hardly ever get complete, connected and readable summaries.

None of the summarizers generated reliably consistent, complete and impeccable extracts to be used as first-stage drafts for human agent editing

According to software companies, summarizers are mainly used as a time-saving reading aid, a kind of complete executive summary which supposedly allows the reader to spend considerably less time understanding the general meaning of documents, getting familiar with their structure and reading without missing key information. In order to meet the highest of standards, they would have to deliver consistent results and generate faultless and complete extracts. However, as is often the case, theory does not square with practice at all, since the tests show that the summarizing software tested is incapable of generating acceptable summaries due to a number of shortcomings outlined below. Neither was any of the summarizers tested able to distinguish itself from the others in any conspicuous way, save in the number of irrelevant ideas generated, nor did any summarizer generate reliably consistent, complete and impeccable extracts which could be used as first-stage drafts for human agent editing.

Almost always, summarizers will extract the first sentence or the first two sentences because they have been programmed to do so: these are deemed to be lexically loaded and to contain the essence of the text. If the first sentence contains conflicting or subordinate ideas or anecdotal content, it can make the summary less useful, if not downright wrong, with the summarizer extracting the negligible first sentence(s) at the expense of more salient ideas, which may then not be extracted because of settings limiting the choice. The matter is aggravated when the first sentence is long and contains trivial subordinate “ideas”. Summarizers cannot recognise these irrelevant parts and do not leave them out as human agents would. The latter holds true for all compound sentences extracted.
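The lead-sentence bias just described can be demonstrated with a toy scorer. The position weights below are my own assumption, chosen only to mimic the behaviour observed in the tests; the sketch shows how a hard-coded bonus for the opening sentence makes the program pick an anecdotal first sentence over a later, far more substantial one:

```python
import re

def position_weighted_pick(text, n=1):
    """Pick n sentences, weighting each sentence's content-word count
    by a position factor that strongly favours the opening sentence
    (an assumed heuristic, mimicking the tested summarizers)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    scored = []
    for i, s in enumerate(sentences):
        content = len(re.findall(r"\w+", s))
        weight = 2.0 if i == 0 else 1.0 / (1 + i)  # assumed lead bonus
        scored.append((content * weight, i, s))
    top = sorted(sorted(scored, reverse=True)[:n], key=lambda t: t[1])
    return [s for _, _, s in top]
```

Fed a text whose first sentence is pure anecdote (“I was late that morning.”) followed by the actual key statement, a scorer of this kind still returns the anecdote, exactly the failure mode criticised above.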

Extracted or filtered out sentences lack cohesion and resemble bitty bits scattered across the pages

Filtered-out or extracted sentences lack unity; they are disjointed and scattered in list form across the page, or highlighted in colour in similar fashion. Most of them have the appearance of brute-force copied-and-pasted text fragments, the summaries generated by Microsoft Word’s summarizing function being the only exception. The latter constricts the sentences selected into impressive-looking paragraphs, thus making it seem that a lexical interrelationship between the “key” sentences selected is preserved. However, a more refined analysis shows that this may only be partially true. The fact that contextually unconnected sentences are placed one after the other under the false pretence that text cohesion is created or retained does not make a better summary or reading any easier. In one case, a “novel” but wrong or misrelated grammatical relation was established which was not present in the original and which substantially changed the meaning of the extract under investigation. It is safe to assume that this is no isolated incident, particularly when sentences begin with a pronoun.

Routinely, the majority of the supposed key sentences extracted are of minor importance or completely irrelevant. Different summarizing programs filter out different “key elements”, and in one case the most important idea was completely missed or “overlooked” by all four summarizers tested. Looking at it from an end-user’s perspective, one could reasonably expect all summarizing software to copy out identical key sentences. Nonetheless, there is all too often too great a difference in the sentences extracted. In one randomly chosen case, there was only a 30% agreement on the text extracted (measured in number of words) between two summarizers. With longer texts, results were even more varied, which casts some doubt on the algorithm-based selection mechanisms employed by different software makers.
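A word-level agreement figure of the kind quoted above can be computed quite simply. The exact measure used in the analysis is not spelled out, so the sketch below assumes one plausible definition: the multiset overlap of words between two extracts, divided by the length of the longer extract:

```python
import re
from collections import Counter

def word_agreement(extract_a, extract_b):
    """Share of words two extracts have in common, measured against the
    longer extract (case-insensitive multiset overlap) - one assumed
    way of quantifying agreement between two summarizers."""
    a = Counter(re.findall(r"\w+", extract_a.lower()))
    b = Counter(re.findall(r"\w+", extract_b.lower()))
    common = sum((a & b).values())
    return common / max(sum(a.values()), sum(b.values()))
```

Two ten-word extracts sharing only three words would score 0.3 under this definition, i.e. the 30% agreement level reported for the randomly chosen case.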

When the nature of the original text makes summarization software look good

Two examples illustrate how the nature of the original text can make any summarization software look good. In one of the tests, an acceptable level of computational sentence extraction was achieved. The other example was appended to an academic draft paper for easy verification. In the first case, the original contained a high number of equally relevant key ideas, so the choice of sentences extracted did not much matter. As a result, the summary is balanced, and even human agents could have made subjectively tinged choices without seeming to have missed out key ideas. The second example, from the academic draft paper, is written entirely in reported speech and gives the appearance of being connected and relatively coherent. It can be deduced that reported speech, or reporting structures with different introductory verbs or phrases serving as semantic links which provide local text cohesion, is “summarization software friendly” in general – or rather distorts test results accordingly, as these semantic links were written by human agents and duly filtered out or copied by the summarizers. Thus it would be an example of extreme partiality to pass this copying process off as a software achievement. All things considered, I think that the test results give rise to speculation about whether acceptable sentences extracted are just fluke hits, and therefore no final conclusions can be drawn.

There is the issue of the optimum text length for machine-generated summaries. When done by human agents, a full précis is usually about one third of the original text length, as opposed to partial or incomplete summaries which concentrate only on certain thematic aspects. It is safe to assume that this is the optimal text length for full summary writing, since it has stood the test of time. Perhaps the standard setting for machine-generated text abridgments should be raised from 25% to 35%. Together with the next generation of AI software, this is likely to render better-quality extracts and provide a better balance of key elements extracted, particularly in long texts (1000 or more words of original text length).

Summarizing software as a first-stage drafter for human-agent précis writing – currently an act of faith and not quality editing

Summarizing software is also meant to be a kind of first-stage drafter for human-agent précis writing, providing a short-list of ideas with the human agent smoothing them or bringing them into a more acceptable, i.e. coherent, format. At least, this has been predicted by some linguists. The commercial software tested is not suited for this purpose, and I do not know if there are more sophisticated summarizing programs other than those commercially available. If summarizers are ever used as fully functional first-stage drafters, the role of human agents would be confined to just connecting and polishing the sentences extracted by summarizers into a readable, coherent format. In this case, human agents would serve as mere text editors without having to read the original text themselves. Anything else would be self-defeating and make summarizing software redundant. This also poses the question whether human agents may use machine-extracted draft summaries in good faith as a basis to rely on to produce coherent abstracts – be they on a higher level of abstraction or just barely edited machine extracts ‒ without reading the full text, which would then be an act of faith and not an act of reason. Present-day summarizing software is not up to par to be used as a first-stage drafter, and I am very much interested to learn whether the next generation of summarizers will still operate on a lower order of “thinking”.

Disconnected and incomplete summaries – a new way of processing information?

On the subject of the scattered, disjointed and incoherent sentences in software-generated summaries discussed in this analysis, there are new, related phenomena to be observed in other areas. According to the linguist Raffaele Simone, “a new way of processing information” has developed, marked by the predominance of less complex over more complex structures. Incoherent machine-produced summaries with disjointed sentences are certainly less complex than coherent human-agent abstracts. In a wider sense, bulleted lists and the limitations of MS PowerPoint and similar software are further cases in point for this new way of processing information. Software used for presentations is conspicuous for its limited writing space, hardly suitable as a carrier of more complex ideas. Further examples of this new way of processing and presenting information can be found in some UK tabloid online newspapers, with “uncluttered” single sentences displayed with generous spacing but without paragraphs. In education, a new kind of exercise in language teaching which favours matching and arranging unconnected or isolated sentences has to a large degree replaced longer and more difficult comprehension exercises. Such exercises constitute less complex structures, facilitating quick visual perception of easy-format alphabetical information “at a glance”.

A retrograde evolutionary step?

In what way these developments are to the detriment of higher-order thinking capabilities in human agents cannot at this point in time be objectively established, owing to the absence of any studies readily available on this subject, save the reports on the alarming decline in average intelligence among 18-year-olds (2008), which was verified by two reliable German sources. Moreover, it stands to reason that a shift from traditional summary writing involving higher-order thinking skills to accepting machine-made, disjointed extracted sentences in summary writing, and (unintentionally) dismissing the training of abstract thinking faculties in education as negligible, may be an evolutionary throw-back starting any time soon. However, I should point out that in no way am I insinuating that some kind of deliberate behavioural conditioning is going on to adapt human mental capacities to the limited, number-crunching properties of software.

User acceptance – what is really known about what users think?

With summaries generated by software being as unsatisfactory as they are, it is surprising that there are no verifiable test results or critical reviews readily available. Little is known about what users really think about the quality of summarizing software. Perhaps people have different views about what a key idea is, or they are satisfied with partial and irrelevant extracts when they find what interests them. Or they may fill in the gaps left in machine-generated summaries from their own prior knowledge and experience, thus correcting faulty summaries or supplementing missing information while reading them, without bothering about quality. Maybe users assume summaries to be good, and/or it is their unshakeable belief in computer experts and their software which makes them accept anything machine-produced because, having grown up in a largely uncritical environment, they do not know otherwise. Furthermore, it cannot be precluded that some users may want to vent their dissatisfaction with deficient summarization software but lack the ability to find the weak spots and articulate their frustration accordingly. More discriminating users may be resigned to putting up with what they deem to be barely mediocre software-generated summaries due to their low level of expectation, as they have become accustomed to not expecting too much.

What compounds this issue is the fact that the human brain tends to attribute sense to any “input”, meaning that even downright wrong summaries can be interpreted as “intelligent” and well-founded because users assume that the computer is infallible, and hence the summaries make sense to the person reading them. This fact was confirmed in a test series of trick lectures which did not make any sense at all. Yet educated native speakers found the lectures “comprehensible” and “stimulating” and believed in the authority of “Dr Fox”, an actor hired for this purpose.

Software generated summaries are far from being “intelligent”; they are difficult to read with little text cohesion, disjointed sentences scattered across the page and too many irrelevant sentences extracted or copied out

The evaluation of the test results with regard to the intellectual properties ascribed to summarizing software was, of course, a foregone conclusion. What is new is that I have shown in some detail how human-agent summaries are created and how software summaries are generated. The machine-generated summaries are far from being “intelligent”; they are difficult to read, with little text cohesion, disjointed sentences scattered across the page and too many irrelevant sentences extracted or copied out. Generating extracts the way it does at present, summarization software is dispensable, the main reason being that it completely lacks higher-order “thinking skills”, properties indispensable for recognizing key messages and conceptual ideas in a text. At present, software-generated summaries could not even be used as first-stage drafts for human-agent editing. I think that users will have to wait for the next generation of AI software before summarizers can be fully relied on. Hopefully, the next generation of software will take data processing to a true level of natural language processing. Until such time, one had better use the advanced search function in search engines to pre-select topics of interest and rely on one’s own close-reading and/or speed-reading techniques.

1 John Hutchins (University of East Anglia), “Summarization: some problems and methods”. In: Meaning: the frontier of informatics. Informatics 9. Proceedings of a conference jointly sponsored by Aslib, the Aslib Informatics Group, and the Information Retrieval Specialist Group of the British Computer Society, King’s College Cambridge, 26-27 March 1987; edited by Kevin P. Jones. London: Aslib, 1987, pp. 151-173.

July 2011



Computer-Aided-Criticism Software and Machine-Translation Software — Problems and Potential


Understanding a text in all its subtlety is a prerequisite when translating from one language to another (and exceedingly desirable in literature appreciation). Like human translators, machine-translation software should have this capacity. Computer systems have proven to be very poorly suited to a refined analysis of the overwhelming complexity of language. State-of-the-art machine-translation software purporting to do just that still leaves much to be desired, as one can easily verify by having translations done by any of the many machine-translation tools available on the internet. Since text interpretation is the common denominator, machine-translation software is similar to that used in computer-aided criticism, if not identical.

My arguments highlight lesser-known problems encountered in computer-aided criticism and may serve to foster a better understanding of present-day machine-translation capabilities and their undeniably huge potential in the future, be that in 50 years or more. Machine-translation software and related analytical software are still in their infancy and, just like children, they deserve our indulgence. They are bound to become better and better over time. There are different approaches to the machine processing of written human language in translation software. I favour Google’s method because – be tolerant of my oversimplification – they try to replicate what the brain does by using as many human translations as possible as sample translations for their database, together with other methods (algorithmic mathematical languages). In the long run, this combination of methods is likely to render better-quality translations which will then be indistinguishable from human translations. By that time, this improved software will also be able to decode and translate connotational meaning.

For the time being, it is beyond the grasp, analytic capability or interpretational power of machines, be they computer-aided-criticism software, machine-translation programs, or grammar-correction and summarizing software, to consistently distinguish between such shades of meaning, let alone connotational meaning. However, machine translations will become better in time, if not human-like. AI experts think that low-level, algorithmic mathematical languages will be succeeded by high-level modular, intelligent languages which, in turn, will finally be replaced by heuristic machines capable of learning from mistakes, through trial and error, and of operating like human beings.

Background information

Although I wrote the main body of this text almost 30 years ago, it is surprisingly up to date, due to the fact that comparatively little progress has been made in this field. In January 1983, with only a modicum of theoretical background in computers and linguistics, I planned to write this essay as a fervent riposte to the editor of the American journal Psychology Today, which had published a two-part series called “The Imagination Extenders” in November and December 1982. I never sent the letter, and apparently no one else did, probably because computer software was only just beginning to emerge and therefore hardly anyone was capable of spotting the weak points in Mr Bennett’s arguments. Here is an excerpt from the original article from Psychology Today:

Computer-Aided Exploration of Literary Style – A tool to a better understanding of literature?
In the two articles, the question is posed whether computers will be able to extend our imagination the way telescopes and microscopes extend our vision. The philosopher Daniel C. Bennett of Tufts University (U.S.) says they will, in two ways: 1) by extending the range of our senses and 2) by enlarging the number of our concepts. He speculates that there are hundreds of telling patterns in the number system suitable for computer analysis and suggests that computers be used to study, among other things, literary style. He says that, being a rather clanking and statistical affair, analysis of word choice and style is a delight mainly to pedants, and he wonders whether the subtle, subliminal effects of rhythm and other stylistic devices – often quite beneath the conscious threshold of their authors – can perhaps be magnified and rendered visible or audible with the help of a computer. The features the computer would heighten could be abstract patterns, biases of connotations and evocations or intangible meaning – not matter.
Incredible as it may sound, Mr Bennett’s bold claims went unchallenged. Not one single letter to the editor was subsequently published. I wish to emphasize that it is not my “mission” but my objective to contribute to the discussion about machine intelligence from a practitioner’s point of view, with years of experience in analysing texts – in the traditional manner.

Under close Scrutiny — Computer-Aided Exploratory Analysis of Literature

Literature appreciation by just reading for pleasure is one way of gaining meaning from a piece of art – formal literature analysis is another. Owing to the works of Jung and Freud, as well as novel approaches towards language from the new fields of neuro-linguistics and psycholinguistics, literature analysis can be highly rewarding, especially if combined with the notion of contemplation. An understanding of the interaction of the many literary devices and techniques is the more academic way of finding out what a writer says and how he says it. Therefore style, which is the object of my exploration, can be an important clue to understanding “meaning” in a piece of literature. What is style, then? Style is the outward reflection of the intrinsic sum-total of everything a writer is at the moment of writing. Style flows from an author’s character in its broadest sense and from his life experience.

Not only does a writer express ideas of which he is aware but he also reveals subconscious ideas and conflicts. Very often, he has no knowledge as to why he chooses a particular word over another – a word that may arrive at the threshold of his consciousness like a shooting star from a vast cosmos of subconscious beliefs, suppressed desires, cherished ideals, primordial instincts, mechanized scripts and from the plane of archetypal symbols before it is clad in reason and logic.

How would a computer know in what way style contributes to meaning?

Style is as individual as a fingerprint. No two styles are ever the same and very often, the same word or outward shell, the same sound pattern has an entirely different meaning when used by another writer or in a different context. How then, would it be possible for a computer to analyse style? Even if it should be possible to programme a computer to enable it to recognize hundreds of literary devices and to make generalizations from particular examples, how would it recognize or process the fingerprint of an entirely different writer who uses language in a new and original way?

How can a computer extract meaning from a writer’s three-dimensional web of associative meaning created by the power of one single word, if the computer knows only the husk or dictionary definition of a word but not its contextual essence, its personal and private elements, its fugitive associations and flashing connotations lived and experienced by the author?
The silent speech of metaphorical language, the body language of language (my own term, but I may be wrong there), of imagery and symbolism, cannot be expressed in digital numbers or in any other form in the number system, since there is an infinite number of possibilities of combining words and creating meaning in ever new groupings and juxtapositions. This problem is further aggravated by the fact that words do not mean the same to different people. Since no two contexts or situations in which words are learnt and used are ever the same, no two meanings, or in this case interpretations, can ever be the same. One could argue that a computer would alleviate this problem in that it could be an impartial judge as to what meaning a particular word should have in all cases at all times. Yet this solution would be unacceptable, as language would be manipulated, unnatural and bland. Apart from this dumbing down of words, it would smack too much of George Orwell’s “Newspeak”. However, one does not need to invoke fiction in order to understand fully the impact such action would have. (The “bias and sensitivity guidelines” used by pressure groups in the US educational system afford a glimpse of what may be in store. Added in 2006.)

Mr Bennett speculates that subconscious notions expressed through the medium of style may be made visible or audible. Would this not signify that a writer’s most secret thoughts, sometimes even unknown to himself, could be projected onto a monitor? Moreover, could this rendering of a colour-coded graphic display representing conscious and unconscious thought-patterns or associative configurations be interpreted and fully understood? Would we need another expert telling the literature “expert” what the particular graphic display unveils? Who would decode the meaning encoded in the colour-graph? Who would interpret the computer’s interpretations of the, for instance, “delicate effects of sound”? The meaning to be unearthed from the colour-graphic display on the monitor would be as enigmatic and complex as literature is to many people.

In order to establish personality profiles, psychologists attempt to read a person’s subconscious mind by analysing his speech pattern, his choice of words i.e. his preferences. But it will never be possible to penetrate a person’s subconscious mind and read the pictures, the language in which the subconscious mind “thinks” or communicates. Mental, invisible images, the evocations of the conscious, semi-conscious and unconscious mind cannot be recorded. Abstract ideas, mental pictures produced by evocations and connotations flowing from the composite elements of style, or even from a single word, are not subject to the law of mathematics and cannot be caged in the number system.

Literature analysis, says Daniel Bennett, is allegedly a delight mainly to pedants. Does this over-generalization not contain a number of dangerous and narrowing assumptions and suggestions? Could it not be misconstrued to mean that a more profound analysis or appreciation is tedious, done only by pedants, and that no one in his right mind should attempt to appreciate and enjoy literature by taking a closer look at it than usual – and that an “expert” analysis should be left to the computer? Are such bold claims not preparatory to reshaping and simplifying human cognitive and intuitive abilities, leading us into a yes-or-no-response Brave New World?

There is more to appreciating literature than counting words. It is an essential characteristic of the appreciation process that through the reading experience itself, by meeting with an author’s ideas, content and substance gain a quality they would otherwise not possess. This is because readers bring in their own thoughts, their experience and their feelings. Marlon Brando, who started his career by playing Shakespeare on Broadway, said in an interview that unless the reader gave something to it, he would not take anything from a book or poem. One could not fully understand what a writer was writing about unless one had oneself some corresponding depth, some breadth of assimilation. Computer-aided analysis of literary style may leave out the reactions of the reader completely. The responsibility would be shifted to the “expert” computer, which would do all the thinking and linking. Will human and humane feelings in literature analysis be entirely discarded and computer-encoded responses become the controlled measure of all literary works?

How would a computer “communicate” with a piece of art? Admittedly, it could be fed with a few individual images and then be programmed to boil them down into generalisations which it would apply whenever it encountered a digital approximation of meaning pre-programmed or assigned to a particular word or combination of stylistic devices. If a more sophisticated programme should ever exist, it might even be able to match two or three literary devices from among the thousands of possible combinations and relate them to a particular phrase, sentence or paragraph. But how would the computer attribute sense to what it finds? In its binary interaction with a piece of literature, the computer would compare its rigid, predetermined and static programme with the real world of natural experience and the communication processes inherent in a piece of literature. How would a computer know, for instance, that a horse and its metaphorical or symbolic meaning in one piece of art might not be the same in another? In what way would a computer know why a particular word has been chosen over another, and why certain words have been placed side by side to create a certain effect which may be lost if one word is exchanged for another? Not only must a great number of dictionary definitions and some of the most common examples of collocation be fed into a computer; it must also be enabled to distinguish non-compatible synonyms. Beyond that, the computer must be able to “intuit” an author’s conceptual understanding of his private and personal usage of any word, even if his meaning varies only in very subtle degrees from common usage. Would it be sufficient to feed into a computer dictionary definitions, which are only a short abstraction of real-life usage? Would it suffice to give the computer knowledge about a sandbox-life which it could never relate to experiences of its own?

One has to concede that word choice is an intrinsic part of style. Still, how does the mere counting of words cast light on meaning? A key word may be used deliberately or, sometimes, without the author’s awareness. It may be used only once and gain significance from its context; it may be used soberly or passionately, prudently and meticulously, calculatingly or with missionary fervour, and yet only once, while another, quite insignificant word may appear frequently. This is because sometimes, even in the rich English language, there is a lack of other words that express the author’s intention adequately. How does one programme the computer to know that quantity or frequency does not equal quality or essence?
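The point about frequency versus significance can be illustrated with a minimal sketch (the sample sentences are my own invention, purely for illustration): a naive word count simply ranks tokens by how often they occur, with no notion of which word actually carries the weight of meaning.

```python
from collections import Counter
import re

def word_frequencies(text):
    """Naive frequency count: lower-cases the text, splits it into
    alphabetic tokens and tallies them -- exactly the kind of 'mere
    counting' that reveals nothing about significance."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(tokens)

sample = ("The horse ran and ran across the field. "
          "Once, only once, it stumbled.")
freq = word_frequencies(sample)
# 'the' and 'ran' dominate the count, while 'stumbled' -- arguably
# the pivotal word -- appears just once.
print(freq.most_common(3))
```

The tally is trivially easy for a machine; deciding that the single occurrence of “stumbled” outweighs the repetitions of “ran” is precisely the judgment no count can supply.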

Technically, all stylistic devices of sound could be made visible on a monitor. Yet, would the computer “compute” the “right and only” meaning to them? Could it relate meaning created by specific sound patterns distributed over several pages to the main theme, to the pivotal points of that particular piece of art or could it judge them to belong to secondary ideas only? More importantly, would the computer be able to assimilate many other literary devices, such as symbolism, irony, puns, hyperbole, metonymy, oxymoron and above all, conceptual metaphors, which may all run parallel to meaningful sound-patterns, into a coherent context? Could the computer make intelligent distinctions between several possible interpretations that belong to the realm of surface meaning? More importantly, could it delve into deeper regions and soar into higher spheres by evocations and connotations created by sound patterns and other literary devices sandwiched onto sound? Would the computer be able to synthesize meaning from several layers of literary devices, from among the hundreds of composite elements of style, from the multi-level flow of sound and imagery?

How can the most common of hundreds of literary devices other than those describing sound – for instance, metaphors, similes, oxymoron and symbols – be made visible on a computer’s output device, since most stylistic devices do not function through sound? Literary devices, or rather stylistic devices (since most authors could not care less what they are called), are the authors’ medium of expressing their ideas. They are their musical instruments with which they make images audible, and they are their palettes transposing sound into images. Their artistic reflections, observations, contemplations or speculations, after having gone through the alembic of their inner worlds, become a unified piece of art, and very often the symbolic language used by authors renders their work seemingly unintelligible, requiring technical, perhaps arcane and sometimes abstruse knowledge on the part of the reader. Even the most dedicated readers of literature may not be able to understand a work of art in its entirety, and some may not be able to see beyond its storyline. At most, they may notice the pleasant effects that can be created by sound. How can a computer programmer, whose forte is probably not the appreciation of literature, programme a computer “to see” beyond sound, to read between the lines, or to recognize a sustained sound-pattern and describe its effects?
Above all, how does one programme the integrating principle which distinguishes between nonsense and sensible ideas, and which may, through flashes of insight or intuition, arrive at new ideas? That integrating principle has not been found yet – that all-important assimilation and joining-device that is capable of attributing sense to an infinite number and variety of external stimuli and to the internal, invisible, silent and yet ever-changing world of thought-configurations.


Still Begging the Question after 28 Years

After almost 28 years, there is still no progress in the field of computer-aided textual content analysis. Computer systems have proven to be very poorly suited to a refined analysis of the overwhelming complexity of language. Conclusions drawn by people working in this area are equivalent to crystal-ball-gazing. In the absence of any results, the future role of computer-aided criticism is still frequently invoked. Up until now, computer-supported analysis of texts has not yielded any important or new results that could not have been obtained by close reading. Consequently, computerized textual research has not had a significant influence on research in humanistic disciplines. Many explanations as to why there are no useful applications with regard to the subject matter sound like feeble attempts to justify the use of computers in this field at all costs. Catchphrases used to this end are “a shift in perspective needed”, “asking new questions deviating from traditional notions of reading texts”, “the need for new interpretive strategies” and “a modified reader response”. They all refer to hitherto unknown, not readily apparent structures which, it is hoped, contain vital elements of literary effects. If they cannot be recognized by humans, are they important at all? This would be tantamount to assuming that authors create a “subconscious” pattern, sometimes over several hundred pages. What are we actually missing?

Stylistics and reader response seem to be treated as two different approaches, and both methods are deemed “problematic” when it comes to assessing the literary effect, whether it is measured in the text itself or in its supposed impact on the reader. The author’s intent, expressed in his communication with the reader, and meaning are deemed more difficult to quantify than matters of phonetic patterns and grammatical structures.

Authorship studies are one area where computers can be used effectively even though very little analysis is performed by the computer itself. Very small textual particles and word clusters selected and indexed by humans are run through computers to establish an “authorial fingerprint”. Thus, complex patterns of deviations from a writer’s normal rates of word frequency are measured. There are other patterns which can be used in such authorial tests like compound words and relative clauses.
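The frequency-deviation approach behind such an “authorial fingerprint” can be sketched in a few lines. The list of function words, the toy quotations and the distance measure below are my own illustrative choices, not the apparatus of any actual authorship study.

```python
from collections import Counter
import math
import re

# A small, fixed set of function words; real studies use far larger,
# carefully selected lists. This list is an assumption for the sketch.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "a"]

def profile(text):
    """Relative frequency of each function word -- the raw material
    of an 'authorial fingerprint'."""
    tokens = re.findall(r"[a-z]+", text.lower())
    total = len(tokens) or 1
    counts = Counter(tokens)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(text_a, text_b):
    """Euclidean distance between two function-word profiles: the
    smaller the distance, the more alike the two 'fingerprints'."""
    pa, pb = profile(text_a), profile(text_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

known = "It was the best of times, it was the worst of times."
candidate = "It was the age of wisdom, it was the age of foolishness."
unrelated = "Call me Ishmael. Some years ago I went to sea."
# The two Dickens sentences share a function-word profile and so lie
# closer together than either does to the Melville opening.
print(distance(known, candidate), distance(known, unrelated))
```

Note that, as the essay observes, the computer contributes only the arithmetic here: the choice of which particles to count, and what any measured deviation means, remains entirely human.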

Interestingly, professional translators hardly ever use machines to do their translations. I know from my own experience that it is harder to edit a machine-rendered translation than to translate the text again from scratch. However, translators sometimes use computer-aided translation (CAT) tools such as Trados or Wordfast, which contain the complete memory of all translations a translator has ever done with the tool. With such a CAT tool, the number of words still to be translated can be determined, and matching segments are suggested as one works through the translation. As to the quality of such translations, very little research has been done in this field, but I surmise that the resulting style may sound “bitty” if anything other than technical texts is translated.
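How a translation memory suggests matches can be sketched roughly as follows; the stored segments, the similarity threshold and the matching method are invented for illustration and do not reflect the internals of Trados or Wordfast.

```python
import difflib

# A translation memory is essentially a store of previously translated
# segments, keyed by the source-language sentence. These two entries
# are hypothetical examples.
memory = {
    "Press the start button.": "Drücken Sie die Starttaste.",
    "Switch off the device.": "Schalten Sie das Gerät aus.",
}

def suggest(segment, threshold=0.7):
    """Return the stored translation of the most similar source
    segment, or None if nothing in memory is close enough
    (a 'fuzzy match')."""
    best, best_score = None, 0.0
    for source, target in memory.items():
        score = difflib.SequenceMatcher(None, segment, source).ratio()
        if score > best_score:
            best, best_score = target, score
    return best if best_score >= threshold else None

print(suggest("Press the start button."))  # exact match
print(suggest("Press the stop button."))   # fuzzy match, same suggestion
print(suggest("The weather is lovely."))   # nothing close enough
```

The “bitty” style the essay suspects follows naturally from this design: each segment is matched and translated in isolation, with no view of the text as a whole.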
It would be interesting to see how computer-aided-criticism software and machine-translation software cope with the hundreds of “Local Englishes” and, this being one of the major subjects of this weblog, with “German English” or “Local English, German Version” in particular. Would this be too taxing a task, as it often is for human translators when they need to waste a great deal of time guessing the meaning of idiosyncratic word coinages and very private grammar “novelties”? It must be a formidable task to write software that can handle the frequently faulty and incomprehensible English one finds in these hundreds of Local English varieties.

More Crystal Ball Gazing?

When I searched the Internet for the latest developments in computer-aided exploratory textual analysis of literature, I was surprised to find that I had not been wide of the mark in my assessment 28 years ago. As to future developments, AI experts think that low-level, algorithmic mathematical languages will be succeeded by high-level modular, intelligent languages which, in turn, will finally be replaced by heuristic machines. These would be capable of learning from mistakes, through trial and error, and of operating like human beings.