Forensic linguists are often called to court to answer one or both of two questions: what does a given text say, and who is its author?1 In order to answer these questions linguists draw on sub-areas of descriptive linguistics, including semantics (the meaning of a word, phrase, sentence, or text), phonology (the study of the sound system of a given language) and morphology (the study of the structure and formation of words).
This paper looks solely at the second question: who is the author? In particular it examines the issues arising from authorship attribution. The report will analyse its real usefulness in criminal trials by looking at previous case law, and then discuss whether authorship attribution should be used at all, focusing on issues such as the reliability of the methods, sample size and the possibility of an individual idiolect.
This paper will not consider individual jurisdictions; however, it must be noted that US courts are far more hesitant about the use of forensic linguists during criminal proceedings.
What is Authorship Attribution?
Authorship attribution, also known as authorship identification or comparison, is just one area of forensic linguistics, and one that has been exercising minds since the time of the ancient Greek playwrights.2 The practice involves determining who wrote a text whose authorship is unclear by comparing its characteristics with those of texts of known authorship; forensic linguists look for markers not just in the words themselves but also in the grammar. This is done by examining all linguistic domains, such as syntax, lexis and those mentioned in the introduction. Each of these is governed by rules, but grammar in particular offers the writer choices, which is what makes it distinctive.
Authorship attribution can be applied to both written and spoken texts. Given the vastness of this area, this report shall focus only on its application to written texts in criminal trials, including confessions and evidence. Although this narrows the scope of the report, the amount of case law and academic commentary available is more than sufficient to provide a detailed analysis.
Does it have real usefulness?
In order to understand and discuss the usefulness of authorship attribution it is important to look at cases where both the defence and the prosecution have been successful.
R v Bentley3
This case concerns a posthumous pardon. The defendant, Derek Bentley, was hanged for his participation in a crime involving the death of a police officer, despite the fact that Bentley had surrendered and was in police custody when Craig, his accomplice, shot the officer while resisting arrest. As Craig was under age he was only sentenced to life imprisonment. At his trial Bentley claimed that the police had ‘helped’ him with his statement, which they denied.
Coulthard was commissioned to produce a report on the disputed confession. One of the marked features he identified was the frequent use of the word ‘then’, which was the eighth most frequently occurring word in the statement. He then examined other witness statements and police reports, both relating to Bentley’s case and to others, and found that ‘then’ was more frequent in police registers (appearing on average every 78 words) than in other types of statement. Coulthard concluded that Bentley was not the sole author of his statement and that police officers at the time had in fact co-written it.
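The rate reported here, one occurrence of ‘then’ on average every 78 words, is a simple interval measure that can be computed mechanically. The sketch below uses an invented toy sentence rather than the actual case material and is purely illustrative of the calculation:

```python
import re

def word_interval(text, target):
    """Average number of words per occurrence of `target`;
    e.g. a result of 78 would mean the word appears once every 78 words."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == target)
    return len(words) / hits if hits else None

# Invented toy sample -- not the actual case material.
statement = "I then went home and then saw the officer fall"
print(word_interval(statement, "then"))  # two 'then' in ten words -> 5.0
```

Comparing such intervals across a disputed statement and reference texts (ordinary witness statements versus police registers) is the substance of the comparison described above.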
The Unabomber Case4
Between 1978 and 1995 several bombs were sent to university and airline property throughout America, including Utah, California and New Jersey, killing three people and injuring dozens. In 1995 a letter was sent to the New York Times threatening further violence unless a 35,000-word manifesto was published; both the Times and the Washington Post did so.
Three months later a man approached the FBI claiming the manifesto sounded like it had been written by his brother. The authorities tracked down and arrested the brother, Theodore Kaczynski, and performed a linguistic comparison between the manifesto and a 300-word newspaper article on the same topic. The FBI claimed that there was evidence of common authorship due to a shared series of lexical and grammatical words and fixed phrases.
The defence contracted their own linguistic expert, who argued that vocabulary can have no diagnostic significance as anyone can use any word at any time. The defence focused on twelve particular items, including clearly, propaganda and thereabouts. After searching the internet, which was then a fraction of the size it is today, the FBI found around three million documents containing one or more of the twelve items. However, when they searched again for documents containing all twelve items, only 69 results were returned, and on further inspection these turned out to be internet versions of the original manifesto. The defence’s argument therefore failed and Theodore Kaczynski was convicted of the crimes relating to the bombings.
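The FBI’s second search, requiring every one of the twelve items to appear together, is in effect a set-membership filter over documents. A minimal sketch of such a filter, using invented toy documents and only three of the twelve items rather than the real corpus or full list:

```python
import re

# Three of the twelve items the defence highlighted; the full list is
# not reproduced here.
markers = {"clearly", "propaganda", "thereabouts"}

def contains_all(document, markers):
    """True only if every marker word occurs in the document."""
    words = set(re.findall(r"[a-z]+", document.lower()))
    return markers <= words  # set-subset test

# Invented toy documents -- not the real search corpus.
docs = [
    "clearly this propaganda was posted thereabouts",
    "clearly a different text altogether",
]
matches = [d for d in docs if contains_all(d, markers)]
print(len(matches))  # only the first toy document has all three -> 1
```

The point of the case is visible even in the toy example: requiring all items jointly shrinks the candidate set far more sharply than requiring any one of them.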
As can be seen from the cases above, authorship attribution can serve both the prosecution’s and the defence’s case. However, given that the defence only has to establish reasonable doubt, and that the burden of proof lies on the prosecution, it is safe to conclude that the real usefulness of authorship attribution lies with the defence. Although not discussed above, there have been numerous cases where the defence has successfully used authorship attribution as evidence, such as the Birmingham Six and Guildford Four appeals.
Should Authorship Attribution be used at all in criminal trials?
In order to answer this question it is necessary to look at some of the arguments surrounding the use of authorship attribution. As with many legal matters there is a mass of academic opinion, each contribution supporting and arguing different aspects.
a. Reliability of Method
As yet there is little to no academic agreement on methodology and technique.5 For this reason only one technique shall be discussed, namely ‘word frequency’, the main reason being that it was one of the techniques used in the cases mentioned above. In addition, the use of statistics shall be discussed as a general issue.
Both the Unabomber and the Bentley cases used word frequency to help determine the authors of the relevant texts: Coulthard focused on the use of the word ‘then’, while in the Unabomber case the focus was on lexical words.
Word frequency is a popular method used as an indicator of authorship; despite this it has been subjected to much criticism. One criticism concerns the randomness assumption.6 This asks whether a word is used at a stable rate across an author’s work. Stability cannot be assumed for content words, as their frequency is determined by the subject matter, but it may hold for function words. Damerau7 concluded that function words were useful in occasional attribution studies but that they were, in general, not good indicators of authorship, as the random manner of function-word use is unlikely to be consistent for one author.
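The kind of function-word comparison at issue here can be sketched in a few lines. The word list and the distance measure below are illustrative assumptions for exposition, not a validated attribution method:

```python
import re
from collections import Counter

# An illustrative (not validated) list of English function words.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "it"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Mean absolute difference between two profiles; smaller values
    suggest more similar function-word habits."""
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)
```

A linguist comparing a disputed text against texts of known authorship would look for small distances to the candidate author; the criticisms above concern precisely whether such profiles are stable enough within one author for this comparison to be meaningful.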
However, Damerau’s conclusion is not commonly accepted, as he considered the works of only four authors; the stability of function words he observed could therefore differ from the norm.8 This means that the use of function words is still a valid technique for authorship attribution, as they can be subject to a combination of stylistic and thematic indicators.
Further problems have also been identified, surrounding the issue that, although vocabulary has the potential to be wide and varied, the number of words used on a daily basis is minimal. Tallentire, for example, noted that 90% of the text in literature volumes in English libraries consists of 10% of the English vocabulary.9 Despite the critical views taken of word frequency it remains a popular technique and continues to be used by forensic linguists in the search for authorship.
It has to be borne in mind that forensic linguists do not rely on one method or technique alone; however, as yet no one has been able to show absolute validity and reliability in their methodology.
The common denominator between the different techniques and methods applied by forensic linguists is the use of statistics. Although the need for an objective methodology justifies the need for statistical analysis,10 at present there is too much emphasis on its use. While statistics are used to validate the results obtained, their mere use does not confer validity on attribution techniques.11
Another issue is that many statistical techniques are borrowed from other disciplines; for example, the Efron-Thisted tests come from butterfly collecting and modal analysis is derived from signal processing. Although the techniques have been adapted for their new purpose, there is little to no reference to the ‘Proposed Criteria for Publishing Statistical Results’.12
b. Sample Size
Most of the sample sizes available for analysis by forensic linguists are small, usually fewer than 200 words; the samples detailed in the cases above go against the norm. Most cases involve the analysis of threatening or suicide letters, which by their nature are short.
Much of the debate on authorship is centred on the reliability of the methodology, overlooking a fundamental question: what size of sample is adequate for a reliable attribution? Eder13 concluded, after completing an experiment, that even 2,500-word samples were not sufficient to produce reliable results. Therefore, when looking at results in a criminal case it is important to consider not only the reliability of the methodology but also the size of the sample, particularly when the cost of being wrong can have such dramatic effects; a failed attribution is much better than a false attribution.14
c. Individuality of Idiolect
One thing that needs to be considered, if only briefly, is the theory of the idiolect. This is the theoretical position that native speakers each have their own individual and distinct version of a language, and that this can be established from the particular choices an individual makes in texts.15 Over time each individual builds up a large active vocabulary; the words that are used, and the preference among those words, will vary from person to person. Authorship attribution has been compared to fingerprinting; however, as discussed, no one method has validity or reliability. Another comparison is with DNA, which defines its matches in terms of probability. Although this seems a more favourable comparison, the notion of a linguistic ‘DNA’ is senseless.
At present over 1,000 style markers have been identified, with several still to be found, and there have been calls for further identification. However, as is clearly evident in literature, styles change over time. The way society in general speaks now is different from that of Tudor times, and more different still from that of the Roman era. The same is true of the language used within an individual’s lifetime; language is susceptible to change not only through education but also through social changes. If we consider the typical Joe Bloggs becoming educated in the field of law, for example, their language is affected not only by the introduction of legalese but also by the advancement of their education.
Although authorship attribution is useful both for the prosecution’s and the defence’s case, there are clear flaws in the reliability and validity of the methods and techniques used to establish authorship. It is possible for forensic linguists to use multiple methods and techniques, thereby increasing the reliability of their results; however, as it is so easy to find flaws in authorship attribution, I maintain that its real usefulness in criminal trials lies with the defence.
When such evidence is used in court, I believe that in the interests of fair justice it should not form the main basis of any argument, and should only be used as supporting evidence by either party. Until more reliable and valid methods are established, the hesitation shown by judges is justified, particularly as the legal system relies so heavily on laymen (jurors) who can be susceptible to societal responses and therefore swayed by the slightest evidence.