
Saturday 29 October 2016

How Many Americans Think Planet Earth Is 6000 Years Old?

A Short History of the World

Most people these days think of H.G. Wells as a science-fiction writer, in fact as one of the pillars of the genre together with Jules Verne, but his best-selling, most popular work in his lifetime was A Short History of the World, which he wrote in 1922.  In the opening paragraph of this brilliant and monumental work, Wells writes:


A couple of hundred years ago men possessed the history of little more than the last three thousand years.  What happened before that time was a matter of legend and speculation.  Over a large part of the civilized world it was believed and taught that the world had been created suddenly in 4004 B.C., though authorities differed as to whether this had occurred in the spring or the autumn of that year.  This fantastically precise misconception was based upon a too literal interpretation of the Hebrew Bible, and upon rather arbitrary theological assumptions connected therewith.  Such ideas have long since been abandoned by religious teachers, and it is universally recognized that the universe in which we live has to all appearances existed for an enormous period of time and possibly for endless time.





The Universe is 6000 years old

The idea Wells describes, of the Universe having been created 4004 years before Christ, was first established by Archbishop James Ussher in the 17th century.  Since that time it has remained a logical assumption that if you believe in a literal interpretation of the Bible’s description of creation, then you must also believe that the planet is in the neighbourhood of 6000 years old.




How many Americans think planet Earth is 6000 years old?  When I asked a version of this question on Quora, the blasé, unchallenged answer that came back was 30+ million.



Is evolution news?

The question of evolution became “news” once again in the wake of the Republican primary debates, when it was noted that none of the candidates would admit to believing in a scientific as opposed to Biblical explanation of human existence.  Dr. Ben Carson, in particular, was singled out for criticism because he is a doctor, and therefore a scientist.  To reject evolution, as many commentators have pointed out, is to reject science.  It’s pretty much impossible for someone to believe in scientific descriptions of the behaviour of molecules and DNA, and still not believe in evolution.

https://www.youtube.com/watch?v=wm-dKc9O0Nc






Why can't US politicians say "the Bible isn't literal truth"?

Dr. Carson’s answer was evasive and disingenuous.  The really important question is why Dr. Carson, a recognized scientist, couldn’t say that in our day scientific research has displaced the literal interpretation of the Genesis story of creation.  For as long as the Bible has been studied, it has been understood that it can be interpreted literally or allegorically.  With scientific advancement, the Bible can still be read as an allegorical text whose primacy lies in the moral lessons of its sub-text rather than in its literal, historical accuracy.  Who are Dr. Carson’s imagined constituents who cannot accept the allegorical truth of the Bible over a literal interpretation?  Who are these constituents who cannot accept what Wells describes as universally recognized, accepted for at least the last 100 years among the literate: that our planet is older and our universe took longer to create than described in the Judeo-Christian Bible?



Do they really exist?  When I asked myself this question, I concluded that this constituency can only exist within a population that doesn’t read.  The question becomes one of literacy.  



If you read scientific explanations or, for that matter, if you actually read the Bible, it becomes pretty obvious that the Genesis version of creation should not be taken literally although, like every good myth or legend, it does have some basis in fact.


How many Americans can't read?

How many Americans can’t read?  Here, for me, is the real shock.
According to a study conducted in late April by the U.S. Department of Education and the National Institute of Literacy, 32 million adults in the U.S. can’t read. That’s 14 percent of the population. 21 percent of adults in the U.S. read below a 5th grade level, and 19 percent of high school graduates can’t read.
[Since I first wrote this post, and continued to research illiteracy in the USA, I have discovered numerous claims that the American literacy rate is 97.7%, 99.9% and 100%.  In each of these three cases, the CIA World Fact Book is given as the source.  I have not been able to find literacy rates for the USA in the World Fact Book.]

The USA is the richest, most powerful and, allegedly, most advanced country in the world, yet in terms of literacy it sits slightly below the world average: according to the CIA’s World Fact Book, the average literacy rate for the world as a whole is 86.1%, while the figures above put the USA at roughly 86%.  The USA has better literacy rates than countries in dire poverty, countries facing protracted civil wars, and countries which actively prevent women from learning to read, but at an 86% literacy rate the USA lags behind China (96.4%), Cuba (99.8%), Greece (97.7%), Jamaica (88.7%), Mexico (95.1%) and Russia (99.7%); in fact, behind most of the stable nations in the world.


Can democracy survive without literacy?

How do you conduct an advanced, sophisticated democracy when so many of your citizens can’t or don’t read?  As Wells points out, nations were able to exist and thrive thanks to the invention of paper and then of print, and the USA in particular thanks to its ability to communicate across great distances by telegraph, railway and steamship.  How can you conduct an advanced, sophisticated democracy when so many citizens are prepared to believe that our Universe was created in seven days, 6000 years ago, because that is what they have been told, and so many leaders are prepared to kowtow to such beliefs?



Civilization and progress

Reading Wells' Short History of the World, you realize that civilization has progressed on our planet because of the double-edged swords of empires, technologies, religions and economies, which can spread knowledge, unify diverse peoples and promote peace and stability, but can equally create hegemony, inequality and injustice, and ignite civil and tribal wars capable of drawing the whole world into their vortex.



With a presidential election in the USA ten days from now, I assume we will soon be relieved from the daily barrage of Donald Trump’s name and image and bombast—unless he marries a Kardashian (a possibility I would not preclude).  When you read the history of empires—Persian, Mongolian, Arab, Greek, Roman, Ottoman, European—it is impossible not to notice how the USA today shows all the signs of a well-established pattern of collapse: irreparable internal divisions; widespread injustice and inequality; declining or stagnant quality of education; xenophobia and protectionism; imbroglio in foreign wars which the population neither supports nor understands; declining attachment to shared beliefs (including, and especially in the American case, belief in democracy itself); internal conflicts based on race, religion and economic class; declining respect for leadership and the political class as a whole; economic decline and extreme indebtedness; an oversized military putting a strain on the overall economy; a marked decline in the physical and mental wellbeing of the average citizen (obesity, alienation, paranoia, drug addiction, etc.); and endemic egoism and radical individualism.  That Donald Trump is an icon of egoism and the reductio ad absurdum of radical individualism is of little importance, but what is truly nation-shattering is that so many Americans see him as representing them, as representing their thoughts and feelings and attitudes.  That is a fact and a fracture from which the USA will not soon recover.



PS:  I really got this election prediction wrong!

Sunday 19 June 2016

Something Rotten in the State of Grammar

Descriptive versus prescriptive grammar

I still haven’t recovered from the revelation that “grammatical mistake” isn’t a mistake.

English grammar is basically pattern recognition.  Once we recognize an established pattern in the language we attempt to maintain it.  Prescriptive grammar (which attempts to dictate how people should speak) eventually derives from descriptive grammar (how people actually speak).  Of course, “ain’t no denyin’” that what some grammarians might take for egregious, fossilized errors, Everyman accepts as just “speakin’ plain.”



Can a mistake be grammatical?

It may be swimming against the current, spitting into the wind, and [insert your own cliche here] to challenge the evolution of the language and attempt to manipulate prescriptive grammar, but that’s what we pedants do.  Inspired by the expression “grammatical mistake,” I have come to surmise that there is something rotten in the state of English grammar.

Adjectives that end in "al"

I first conjectured that the problem could be located in how we use and misuse adjectives that end in “al.”  Typically a noun is used as an adjective and then we add “al” to give the adjective a new meaning, as shown here:

Noun → adjective → “al” adjective

economics → economic → economical
politics → politic → political
logic → logic → logical
rhetoric → rhetoric → rhetorical
mathematics → mathematics → mathematical
grammar → grammar → grammatical

Adding "al" changes the meaning of the adjective

The pattern shows that adding “al” changes the meaning of the adjective:  a “logic lesson” versus a “logical lesson,” a “rhetoric question” versus a “rhetorical question,” a “grammar book” versus a “grammatical book,” an “economic study” versus an “economical study.” 

My list of “al” adjectives (above) is quite small.  Like the proverbial blind monk attempting to describe an elephant by feeling its tail, I was perhaps considering an untypical sample.  Scientifically, I should be considering all “al” adjectives.  Oops! Have you any idea how many words in the English language end in “al”?  The internet mocks me again by providing various lists of words that end in “al.”
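(If you want a rough count of your own, here is a minimal sketch in Python, assuming a Unix-style word list at /usr/share/dict/words; the total depends entirely on which dictionary you use, so expect it to differ from the figures cited below.)

    # Count dictionary words ending in "al"; the result is a ballpark figure,
    # not the 3544 or 1272 reported by the online lists mentioned below.
    with open("/usr/share/dict/words") as f:
        words = {line.strip().lower() for line in f if line.strip()}

    al_words = sorted(w for w in words if w.endswith("al"))
    print(len(al_words))     # rough count of "al" words
    print(al_words[:10])     # a small sample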

This list offers 3544 “al” words:


Meaning of the suffix "al"

This site offers 1272 words that end with the suffix “al,”  and adds that the suffix “al” means “relating to,” as if to mock me once again for thinking “grammatical mistake” was a mistake.


What can we say about words that end in “al”? Most of them seem to be adjectives.  Nouns like “cereal” and “offal” are among the rare “al” nouns, but they also serve as adjectives.  It would be an exaggeration, if not an outright mistake,  to categorize “al” as a suffix in all the instances listed, if we mean by “suffix” something added to an already existing or independent English word.  

Is "al" a suffix?

For example “leth” is not a word, but “lethal” is.


I would imagine that there is an etymological explanation that can trace “leth” as a Greek or Latin source and “al” as a suffix, but the issue I am trying to grasp is what happens within the English language when you add “al” to an existing adjective.  There are many “al” adjectives which have no form or root in English when you remove the “al.”

Among those that do, the adjectives seem to consistently show change.  What does the change mean?

humour → humoural
metaphysics → metaphysical
physics → physical
abdomen → abdominal
chorus → choral
allegoric → allegorical
analytic → analytical
commune → communal
terminus → terminal
ecologic → ecological
structure → structural

Return to Baker and 1901

My conclusion is that Baker is still right and we should avoid “grammatical mistake” and, for that matter, “grammar mistake” in favour of “an error in grammar” or simply use the adjective “ungrammatical.”  The conspiracy of errors that we call modern English has created yet another obvious flaw because educated native speakers of English have lost track of how to use adjectives.  Instead we have come to blithely accept that “grammatical mistake,” “grammar mistake,”  “ungrammatical mistake,” and "mistake in grammar" all end up referring to exactly the same thing.  “Logical fallacy,” “illogical fallacy,” “logic fallacy” and "fallacious logic" would also all have to have the same meaning (and thinking about it, I have concluded that the phenomenon should still be called "sophistry").  We have muddled the subtleties and precision which, I assume, changes in spelling were originally intended to convey.

Wednesday 15 June 2016

“Grammar Mistake” or “Grammatical Mistake”: Which Expression Is Correct?

I asked a version of this question on Quora, naively and mistakenly assuming that I would launch a groundswell of support to stop people from using the expression “grammatical mistake.”  It seemed pretty obvious to me that something was either “grammatical” or a “mistake”; it couldn’t be both.  The word “grammar” is used as a noun modifier (actually every noun in the language can be used as a modifier), which we use for “grammar book,” “grammar teacher,” “grammar lesson,” so clearly the correct expression must be “grammar mistake.”  Imagine my surprise at the unanimous responses that there is nothing wrong with “grammatical mistake.”




I must admit that I was trying to be a bit too cute in how I formulated the Quora question:  “Isn’t the expression ‘grammatical mistake’ a grammar mistake?”  As a number of my respondents pointed out,   “grammatical mistake” isn’t a grammar mistake because it combines an adjective and a noun.  That’s how grammar works.  The expression may be semantic nonsense but that doesn’t mean it is an error in terms of grammar.

In truth, none of my correspondents would join with me in calling the expression nonsense, and would only go so far as to say that it might be taken as an oxymoron.  As Billy Kerr patiently and clearly explained:

“‘grammatical’ has two distinct meanings.
Grammatical is an adjective: 1. relating to grammar. 2. well formed; in accordance with the rules of the grammar of a language
Mistake is a noun.
The adjective (in sense 1 - see above) modifies the noun. It’s perfectly grammatical (in sense 2) for an adjective to modify a noun, since that is the purpose of adjectives.
If sense 1 did not exist, it would not be ungrammatical, it would just be an oxymoron.”
Of course, "sense 1" does exist, so I can’t even save face by claiming that the expression is an oxymoron.  Could I claim it was ambiguous, a bit confusing?  Maybe, but not really.  When literate, native speakers of English unanimously claim that something is correct English, then it is correct English.  That’s how language works.
Still I was disturbed. Was it just that I didn’t like being wrong, especially about the English language?  Probably.  Why did I think “grammatical mistake” was a mistake?  Searching online I discovered this answer:
"The expression 'grammatical error' sounds, and is, in a sense, paradoxical, for the reason that a form can not be grammatical and erroneous at the same time. One would not say musical discord. . . . Because of the apparent contradiction of terms, the form grammatical error should be avoided and 'error in construction,' or 'error in English,' etc., be used in its stead. Of course one should never say, 'good grammar' or 'bad grammar.'"(J. T. Baker, Correct English, Mar. 1, 1901)
from http://grammar.about.com/od/fh/g/grammaticalerrorterm.htm
This discovery wasn’t all that reassuring since I found it on a web page called “grammatical errors” and it meant I was about 115 years out of date, and even Baker wasn’t willing to call “grammatical error” a mistake, just an expression to be avoided.  To add to my misgivings, Baker’s example of “musical discord” was an expression I could imagine myself using.  Then there was my Quora correspondent Bernard Glassman, who acutely observed that the problem I was alleging would also have to apply to “hypothetical question” and “logical fallacy.”  Ouch.  I had never complained about “logical fallacy” but the expression suffered the same contradiction as “grammatical mistake.”

Reading (in fact, misreading) Edward Anderson, a third Quora respondent, I suddenly considered another possible meaning of “grammatical error.”  Could it mean that grammar was wrong?  Not that anyone’s individual use of grammar was wrong, but that the rules of grammar themselves were wrong at some other level—in terms of semantics or logic or efficiency or clarity.

I have certainly sympathized with students who found it plainly stupid that “my brother is bigger than me” is ungrammatical and “he is bigger than I” is grammatically correct.  Traditional prescriptive grammar has created some fatuous notions like “split infinitives” and not ending a sentence with a preposition (on the grounds that you can’t do those things in Latin).  The most recent grammar controversy even has a name, the oxymoronic “singular their.”  Prescriptive grammar (pre-controversy) dictated that “Every student handed in his assignment on time” was correct grammar even if every student in the class was a woman.   This might be an example of a “grammatical mistake” but, of course, it’s not what people mean when they use this expression.

I haven't let go.  I need to pursue this conspiracy we call grammar and standard English further and deeper and wider.


In the interests of full disclosure, here are the responses of my Quora correspondents:


Billy Kerr, Native English speaker, from the UK.

No, because “grammatical” has two distinct meanings.
Grammatical is an adjective: 1. relating to grammar. 2. well formed; in accordance with the rules of the grammar of a language
Mistake is a noun.
The adjective (in sense 1 - see above) modifies the noun. It’s perfectly grammatical (in sense 2) for an adjective to modify a noun, since that is the purpose of adjectives.
If sense 1 did not exist, it would not be ungrammatical, it would just be an oxymoron.

Bernard Glassman, Once a teacher of English, always, and annoyingly, a teacher of English.

If "grammatical mistake" is itself an error in grammar, is calling something a "hypothetical question" equally erroneous, since it is, in fact, a question? What, then, is a logical fallacy? (This is getting to be way too much fun, but I would love to hear some other examples of those two, contradictory, meanings of “-ical.”)

Selena York, Business, Marketing, Finance, Insurance, Advertising, Consulting, Management,

I always thought it was “grammatical error”. Either, or -

Kimberly Masterson, Editor, proofreader, writer in the United States

Thanks for the A2A. Grammatical mistake is acceptable. My personal opinion is that grammatical error sounds better. Both are grammatically correct.

Edward Anderson, 7 years of Grammar School

Interestingly, however, even if we stick by your chosen definition of #2, which is by far not the most commonly used one, the term “grammatical mistake” is still not a mistake in grammar. It is a syntactically well-formed phrase consisting of a noun and an adjective that modifies it. It is, at best, an oxymoron, like “jumbo shrimp,” “military intelligence,” or “president trump.”
In fact, there are entire classes of what you refer to as grammatical mistakes, where the grammar is unassailable, yet still there is a mistake. We see them far more often in computer programs than in natural language. There’s the banana problem, where you run off the end of an array (so called as an homage to the grade-school child saying, “I know how to spell banana, but I don’t know when to stop.”) Then there’s the off-by-one error, where you store information in an array as if it’s zero-based, but retrieve it as if it’s one-based. The more formal term for these is not “grammatical error,” however; it’s semantic error.
You see, in English, “grammatical error” in common usage does not mean an error that is grammatical. It means an error in the grammar. And semantic error does not mean an error that is semantically well-formed; it means an error of semantics.
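[For readers who don’t program, Anderson’s “off-by-one” example is easy to sketch. The few lines of Python below are my own hypothetical illustration, not his: the syntax (the “grammar”) is perfectly well formed, but the meaning is wrong.]

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

    def month_name(month_number):
        # Bug: month_number is one-based (1 to 12), but list indexing is
        # zero-based, so every lookup is shifted by one and December fails.
        return months[month_number]

    print(month_name(1))   # prints "Feb", not "Jan": a semantic error, not a grammatical one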

Billy Kerr 
Actually sense 1 existed first. “grammatical (adj.) 1520s, of or pertaining to grammar," from Middle French grammatical and directly from Late Latin grammaticalis "of a scholar," from grammaticus "pertaining to grammar".
So etymologically speaking, you have the timeline backwards.

Malathy Garewal, Never learnt the grammar, but am a voracious reader and love the language.



Thanks for the A2A.
No, I do not think so.
I do understand the reason for the question, but I think here ‘grammatical’ is used as a qualifier for the kind of mistake made. Though I personally would prefer to say that something is grammatically wrong.
As for your reasoning of ‘grammatical’ versus ‘ungrammatical error’, think of substituting ‘typographical’ or ‘spelling’. While I can say something is a ‘typographical error and not a spelling mistake’, it would not be right to say ‘untypographical’. Hope that makes sense.

Sunday 1 May 2016

This Professor Should Be Fired for Defending What I Believe In

I call it the “ad hominem dilemma.”  Just to remind you, an “ad hominem argument“ is a logical fallacy defined as trying to win an argument by attacking a person rather than the ideas that person is trying to present or represent in a debate.  The dilemma I have just coined occurs when you like an idea, but you don’t like the person presenting it, or you like a person but you don’t like the idea or argument.  In an ideal world the dilemma disappears because you always agree with the ideas of the people you like—though you might want to have your intellectual rigour checked.

So you might feel torn when you discover that Hitler liked apple pie, and you like apple pie, but you don’t want to be identified as one of those apple-pie-eating Nazis.  Like me, you might have wanted to tear out your hair when Wayne Gretzky announced he was supporting Stephen Harper in the last federal election—you remember, the election Gretzky couldn’t vote in because of Conservative policy preventing non-residents from voting.  Tells you what years in the California sun can do to an otherwise sane Canadian hockey player.

Then there’s the Donald Trump (aka Drumpf) phenomenon.  You may have heard the claim that an infinite number of monkeys pounding on the keys of an infinite number of typewriters (i.e., keyboards without computers) would eventually type the complete works of Shakespeare.  Trump/Drumpf gets so much media coverage, without ever spelling out the details of his proposals, that eventually he is bound to make some vague promise that you agree with, and there you are facing the “ad hominem dilemma.”

Many women were dismayed by the outcome of the Jian Ghomeshi trial.  It seems pretty obvious that consensual sex does not mean you are consenting to be choked and punched in the head, but how the obvious was represented at trial was anything but clear.  Ultimately, the acute “ad hominem dilemma” has been provoked not by Ghomeshi himself (okay, being an anus is not a provable crime, but still he has been proven an anus) or by his accusers, but by Marie Henein, Ghomeshi’s lawyer.




Marie Henein should be a feminist icon, a heroine for all womankind, a tough, skilled, astute defence lawyer at the peak of her profession.  In fact, she is all those things and has become them by defending people accused of some pretty heinous crimes, including crimes against women--because that's what defence lawyers do.  Both Michelle Hauser in the Whig ("Mansbridge hit journalistic low point") and Tabatha Southey in the Globe ("Upset about the Jian Ghomeshi verdict? Don’t get mad – get informed") have broached the dilemma which Henein has provoked.

The issue of my concern will seem trivial, insignificant and certainly pedantic by comparison to the justice system's futile struggles to prosecute sexual assault.  The object of my obsession is the course plan; what is usually referred to in colleges and universities as the syllabus (the “silly bus” that carries students from the beginning to the end of the course?).  Who cares about syllabi?  Well, I guess people of my ilk who know how to pluralize "hippopotamus"--pedants (which is generally an insult even though it just means "male teachers.")

I used to really care about course plans . . . a lot.  I didn't call them course plans or syllabi, I used to call them "the contract" and I would do this really pumped-up, earnest presentation in the first class explaining that this document was a contract between me and my students, that they had the right to object and make changes if they could persuasively argue that something I was requesting was unreasonable or there were better alternatives.  If the first class and "the contract" went well, chances of the course as a whole going well were vastly improved.

Then the worst happened. University administrators began to agree with me that course plans were really important.  The Chair of our department announced a new policy. In the name of providing the best possible education to our students, in future we would all submit our course plans for review at the beginning of each semester.  My colleagues and I objected to this new policy on three grounds:  1) it was redundant; the information that might concern the department was already available in the form of course descriptions which were regularly updated, 2) the requirement to submit a more detailed description of what we would be doing with students to an administrator seemed more like surveillance than pedagogy, and 3) it would lead to bureaucratization, the uniformisation and rigidification of all course plans.  Redundancy was undeniable, but we were assured that in no way did this new policy suggest increased surveillance or bureaucratization.  The new policy was implemented.

The first time I submitted a course plan, the department Chair took me aside--at the department Christmas party--to tell me she had reviewed my course plan and determined that I hadn't scheduled enough classes for one of my courses.  I had been teaching the course for ten years and the number of classes had always been the same.  How was this not surveillance, I wondered? A year later, under a new Chair, I was notified that the same course plan contained one too many classes.  Luckily for me, as a tenured professor, I could and did blithely ignore the instructions in both cases.  

A more damaging outcome for me was the bureaucratization of the course plan.  With each passing semester I received increasingly insistent and precise instructions on the form and content of each course plan circulated through the Faculty of Education and seconded by my own faculty. The upshot was that as I presented my course plan to students I realized that what they saw before them was a replica of every other course plan that had been presented to them that week. The chances that I could credibly describe the plan as a mutual contract were nil. Even the possibility that I might convince the students there was something distinctive in the syllabus, something worthy of their concentration and interest, was minute at best.  They would view the course plan as bureaucratic red tape, imposed as much upon me as it was upon them, and they weren't wrong.  In the name of "providing the best possible education for students," I was deprived of a useful pedagogical tool.



In recent weeks, reading reports online about Robert T. Dillen Jr., an associate professor of "genetics and evolutionary biology at the College of Charleston," who was facing suspension for refusing to change his course plan for the university's suggested course "outcomes," I thought "a messiah, a Prometheus willing to sacrifice himself to give fire to university teachers everywhere!"  I read the article in which his Dean accused him of playing "Silly, Sanctimonious Games" and described complaints against Dillen Jr., including his self-confessed, impish penchant for deliberately misinforming students and refusing to answer their questions. Then I read Dillen Jr.'s defense of his resistance: "Why I’m Sticking to My ‘Noncompliant’ Learning Outcomes."

My ad-hominem dilemma:  despite my conviction that course plans should be the purview of teachers not administrators, everything that I have read (especially his own words) leads me to the conclusion that this Robert T. Dillen Jr. is really an ass.  His only motivation seems to be that he likes being an ass and his pleasure was redoubled by the fact that he could get away with it.   As a tenured professor he can be an obfuscating, obstreperous lump of inertia who doesn't even have to logically defend himself and no-one can do anything about it, or so he thought.

Dillen Jr. has been teaching for 34 years.  He was consulted, advised, warned, and presented with alternative "outcomes" which he rejected. Still he manages to feign bewilderment, as if he were the only calm rational mind in this brouhaha rather than its provocateur, and asks rhetorically:  "How could such an apparently minor disagreement escalate so far, so fast?"

I am irked, in the first place, because Dillen Jr. could not have done a better job of undermining all university teachers in their efforts to control the presentation of their own courses.  When university administrators argue that the syllabus must be administered by the university and not left in the hands of eccentric egg heads, Dillen Jr. will be the precedent they cite.

But I am also outraged by a university professor's vain display of elitist, aloof, opinionated incoherence.  In lieu of "course outcomes," in his syllabus, Dillen Jr. inserted a quotation from a speech given by Woodrow Wilson at Princeton University in 1896.  In his apologia, Dillen Jr. offered three justifications for use of this quotation as the learning outcome of a biology course:  1) he and Woodrow Wilson were born 10 miles apart, 2) both he and Wilson "were Presbyterian professors"  and 3) that Wilson "seems to be so universally despised." 

Here is the Wilson quotation which Dillen Jr. used as his "course outcomes" and cannibalized for his rhetorical self-defence:
Explicit Learning Outcome. "It is the business of a University to impart to the rank and file of the men whom it trains the right thought of the world, the thought which it has tested and established, the principles which have stood through the seasons and become at length part of the immemorial wisdom of the race. The object of education is not merely to draw out the powers of the individual mind: it is rather its right object to draw all minds to a proper adjustment to the physical and social world in which they are to have their life and their development: to enlighten, strengthen, and make fit. The business of the world is not individual success, but its own betterment, strengthening, and growth in spiritual insight. ‘So teach us to number our days, that we may apply our hearts unto wisdom’ is its right prayer and aspiration."— Woodrow Wilson, 1896
Beyond the ludicrousness of his justifications, the gross absurdity of Dillen Jr.'s using this quote as the cornerstone of his refusal to accept and adjust to authority is that the quote and the Princeton Commencement speech from which it is taken and even the Bible quote which it cites (and Dillen Jr. re-cites) are all explicit refrains of the theme that the individual must accept and submit to the direction of higher authorities, including "the social world in which they are to have their life"--exactly what Dillen Jr. is refusing to do.

Nowhere in his exposition does Dillen Jr. show any interest in what his students might (or might not) be gaining from his stubbornly repeated use of Wilson's quote (encouraging Princeton grads to enlist for the Spanish-American War) for his "course outcomes."  The university's decision that Associate Professor Robert T. Dillen Jr. "would be suspended without pay for the fall 2016 academic term" strikes me as a setback for all good teachers and a gift to the students of genetics and evolutionary biology at the College of Charleston.


Addendum

Princeton University decides to remove Woodrow Wilson's name from its building because of racist history.



Monday 21 March 2016

The Art of Complaining

“Complain, complain, that’s all you do
Ever since we lost!
If it’s not the crucifixion
It’s the Holocaust.”
L. Cohen

In my brief (five years) and tiny tenure as an administrator responsible for an array of university programs, one of my duties was to receive student complaints.  Students usually had real or at least honestly perceived grounds for complaint.  The typical complaint was about the quality of instruction or the instructor of a particular course.  Frequently, the student would announce a shift of discourse with the phrase “It’s not the reason I’m here, but . . . .”

The irony of the situation was that if a student wanted to complain about a grade or even the evaluation of a particular assignment, that was a situation I could easily deal with--and that was the point students would take twenty minutes to get to.  The university had rules and procedures in place for reassessing a mark.  As I discovered the hard way, the university provided no legal means for dealing with a lackluster or incompetent teacher.  Like the psychoanalyst of the how-many joke trying to change a lightbulb, I could only change an instructor if he/she wanted to change.

Being faced with complaining students reminded me of early days as a steward in my ESL teachers’ union.  The principal duty of a steward was to represent and counsel teachers through the grievance procedure, and we were given a weekend-long course on how to grieve (the legalistic verb for “to complain,” not a synonym for “to mourn”). Step one and rule one of the grievance process was to know what my brother and sister union members wanted; that is, what outcome, of a limited number of possibilities, they were looking for from the grievance.  Sounds simple, right?   I found this advice to be logical, compelling and useful, but the objective is what people most frequently lose track of in the process of complaining.  This lack of focus is, I believe, what gives complaining a bad name.



Decades later at a university department meeting, one after another, my colleagues were complaining bitterly about how prolific and quick students were to complain.  I interrupted the brouhaha to suggest that complaining was a good sign; it meant students cared and, furthermore, I was thinking of preparing a module on “how to complain” for one of my courses.  My colleagues were not amused.

I really believe that complaining is beneficial, that we all benefit from those who have the wherewithal and courage to complain.  They are the whistle-blowers of everyday life, but the problem with complaining is one of degree, of frequency, of being identified with the “boy who cried wolf” once too often.  The conundrum for the would-be complainant then becomes the proverbial “separating of the dancer from the dance”: how to complain without being a complainer.  Although I was told when travelling in Europe that it paid to pretend to be French because the French were known to complain and would, therefore, get good service, I was never able to empirically verify this hypothesis--but it makes sense.

I have also been warned that my attitude toward complaining as being about outcomes was masculinist.  (Excuse the gender stereotyping here.  I’m just the messenger.)  I have been informed that when a woman is complaining, a man’s suggesting a (pre-emptive and perfunctory) solution to a problem simply compounds a woman’s frustration and irritation.  It took me a while to understand this instruction, but I have come to recognize the universal principle that the less you know about a problem the easier it is to imagine a solution.  If you (and I) immediately see an obvious, quick and easy solution to a problem being presented to us, chances are we have failed to understand the details and recognize the complexity and intricacy of the issue.

There is a phenomenon that is usually identified as complaining but is really “self-expression”—as vague as that locution is.  Sometimes it is necessary or at least healthful to decompress, to vent, to exhale with expletives.  What passes for complaining is often just thinking out loud.  Sometimes we just need to hear our own words (in my case read them) in order to clarify our own thinking to ourselves.



I used to be a fan of the television series House.  Dr. Gregory House, out of context, always sounded like he was complaining, but he was carrying out a process of “differential diagnosis.”  I didn’t quite know what that meant until I read Crilly’s definition of “differential calculus.”  Both cases are studies of change:  what has changed, what needs to change, the speed of change, the meaning of change, the prognosis and prescription for change.   Complaining is a differential science and a differential art.



Thursday 17 March 2016

“Let’s End the Myth that PhDs Are Only Suited for the Ivory Tower.” Really! Why?

“Let’s End the Myth that PhDs Are Only Suited for the Ivory Tower.”  This was the headline for an opinion piece in the Globe and Mail written by Queen’s University’s Dean of Graduate Studies.  The article reminded me of meetings our tiny caucus of English teachers used to have once or twice a year with our faculty’s Dean.  Invariably, at some point in the meeting, the Dean would turn to my colleague who was responsible for our section’s graduate programs and ask:  “How many new admissions do you have for next semester?”



Everyone in the room knew, with the possible exception of the Dean (and I suspect he may have known as well) that, at this point, we had two or maybe three new admissions.   Invariably my colleague would look surprised and begin to shuffle papers.  Having briefly occupied his position before he did, I had a rough idea of the bafflegab he was preparing to deliver, but I was never as good or practised at it as he was.

“Well,” he would begin, “in order to give the most up-to-date numbers I would have to include the inquiries that the secretary passed on today. With those six, and two students from France who emailed, and of course we have identified eight of our own BA students, and two MAs, as well as the returning students, and that’s right, Theresa Somebody, a really excellent candidate will be coming in as soon as she gets confirmation of her funding request . . . .”

Thanks to the miracle of my colleague’s loaves-and-fishes rhetoric,  the Dean could claim that our two new admissions appeared to be somewhere in the neighbourhood of twenty-two.  As long as no-one insisted on knowing accurate numbers, we all had plausible deniability when, next semester, our graduate seminars turned out to be the size of tutorials.

The “Let’s End the Myth” piece in the Globe is an extension of the same desperate smoke-screen rhetoric designed to dissuade anyone from asking for accurate numbers.  

Sure, let’s end the myth that PhDs want to be professors, and at the same time we can get rid of the myth that people who go to medical school want to be doctors, or that people who go to law school want to be lawyers, or that people who study accounting want to be accountants.  Or, we could go the other route, and take a look at accurate numbers for the occupational outcomes of PhDs and deal with the situation we know exists.

Before we get to the big question, we should stop to consider that—at best—only 50% of people who start a PhD in the humanities actually finish.  The average length of time to complete a PhD is seven years, the mode is ten.  Of the lucky 50%, how many, after five, seven, or ten years of hard work and study, get the tenure-track university positions which at least 86% of them have declared they covet?   And the answer is . . . wait for it . . . we don’t know!

How can we not know something as basic as how many PhDs get tenured jobs?  Just like my colleague who had to hide the fact that we only had two new students, universities in general have to hide the dismal outcomes for PhDs.  To reveal the numbers would put courses, programs, prestige, credibility, funding and ultimately positions at risk.

What do we know?  According to the available statistics, which are somewhat out of date (from 2011) and optimistically inaccurate, 18.6% of PhD graduates got full-time teaching positions in universities.  “Full time” does not mean permanent.  Crunch those numbers!  You start with 100 PhD students; at best only 50 of them survive the hard, five-to-ten-year slog and successfully complete the degree.  Of those 50 successful PhDs, we don’t know exactly how many, but we do know that fewer than nine (9 of the original 100) got the tenure-track university jobs which, for the great majority of them, were the goal of the PhD in the first place.
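The same arithmetic, sketched in a few lines of Python using the figures cited above (the tenure-track share is, as noted, unknown, so the last number is only an upper bound):

    starting_cohort = 100
    completion_rate = 0.50    # at best, half finish the degree
    full_time_rate = 0.186    # the 2011 figure for graduates in full-time teaching

    graduates = starting_cohort * completion_rate      # 50 of the original 100
    full_time_teaching = graduates * full_time_rate    # about 9.3 of the original 100

    print(round(graduates), "graduates out of", starting_cohort)
    print(round(full_time_teaching, 1), "in full-time teaching; tenure-track jobs are a smaller, unknown subset")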

Given these depressing outcomes, you might imagine that universities are working hard to increase the number of tenure-track positions or downsize doctoral programs or a bit of both.  On the contrary, from everything I have read, the dominant priority of universities is still to maintain or increase PhD enrolments while hiring small armies of underpaid and underprivileged adjuncts, sessional and part-time lecturers to do most of the teaching.  Why? You might well ask.

Crudely put, screwing over PhDs has for decades been the national sport of academia.  For the university as well as individual departments and programs, the PhD student is a cash cow for government funding.  Additionally, PhDs are still a mark of prestige.  Universities identify themselves as either PhD-granting (one of the big boys) or non-PhD granting (not so big) institutions.  Individual professors applying for research grants will have their prospects vastly enhanced if they can point to PhD candidates who will serve as their research assistants.  The lucky professorial few who win the grant money can use it to travel to conferences in Miami, Mexico and Honolulu, while the bull work of their research projects is carried out by their minimum-wage PhD research assistants.

Universities might put out foggy and evasive arguments suggesting that the problems and the solutions are incredibly complicated—they aren’t.  The solution isn't for McDonald's or Rogers or Walmart to hire more PhDs.  The PhD has to be the minimum requirement for teaching in a university (with rare, obvious and fully justified exceptions) and for any other significant position within the university hierarchy for that matter.  Any PhD holder who is teaching at a university should automatically qualify for tenure.  The ballooning administrative and support budgets of universities need to be transferred to pedagogical objectives.

As I pointed out in an earlier post “universities have a vertical monopoly, being both the exclusive producers and major employers of PhDs.”  It’s time universities began to acknowledge and correct their abuse of this monopoly.




Why Is the Vagina Masculine? And What’s the Alternative?

“Vagina” is masculine  I first came across this factoid thirty years ago in Daphne Marlatt’s novel Ana Historic .   It came up again more r...