I started in earnest reading about academic freedom a couple months ago. I'm quite perplexed. Lemme try to sort out a couple perplexities.
Historically, in the US, academic freedom and tenure have been inextricably linked. Tenure's key legitimating purpose, it is said, is to protect academic freedom: a tenured professor cannot be fired without due process, so that professor cannot be eliminated by a university administration merely for unpopular, controversial, or critical utterances. (If true, this would mean that non-tenured faculty and all contingent faculty have no de jure right to academic freedom.) An immediate question that arises is: what kinds of utterance? Critical of the university administration? Critical of colleagues? Unpopular in one's disciplinary field of research? Unpopular politically according to dominant ideologies in the US? Controversial regarding electoral politics or political issues? Or regarding sexual mores, or the high cost of gasoline?
In significant and well-known cases of tenured professors being fired, typically what has led to the firing are comments that are rather outrageous, from the standpoint of dominant political ideology in the US. For instance, Ward Churchill called the dead from the World Trade Center terrorist attack "little Eichmanns," which was nasty of him.
It is not clear that due process is routinely followed in these cases. Instead, an administration abruptly fires a professor, and legal and quasi-legal proceedings ensue. AAUP is called in to investigate, lawsuits are filed, all hell breaks loose. But it isn't tenure that protects this professor from being fired.
The cases in which tenure does protect a professor are probably not well-known, precisely because an effort to fire a professor that would run afoul of academic freedom fails: due process is followed and protects the professor. Because we don't hear about such cases (no doubt the proceedings would be confidential), they don't provide evidence that tenure protects academic freedom.
My skeptical assessment of this situation is that one would take tenure to protect academic freedom basically on faith. One would also take on faith what kinds of utterance would be protected.
Thus my first perplexity: whether tenure, viewed as a process, is something that can protect academic freedom. Not if tenure works the way Marc Bousquet describes the process in How the University Works. My own take on it is maybe slightly less trenchant than the always delightfully trenchant (to me anyway; he rubs a whole lotta people the wrong way) Bousquet.
When I've heard or talked to tenure-track professors, candidates for tenure, about their work lives, academic freedom does not come up. Workload is about all they can talk about, and they barely have time to talk about that. They are desperate to publish as much as they can, to teach whatever they are told to teach, and to do whatever mundane committee work they are told they have to do, in order to satisfy and overwhelmingly exceed stated requirements for tenure. If academic freedom is supposed to cover unpopular, controversial, or critical utterances, tenure candidates do not have academic freedom, because they would never go anywhere near such utterances before reaching tenure. Plus, everyone they talk to tells them this.
So, once tenured, professors have academic freedom, and let that criticism flow forth, yes? No. Once tenured, professors seek promotion to full professor status, and they do so by continuing the work they did as tenure candidates. Although they may acknowledge that tenure protects them from dismissal, they know it doesn't protect them from being denied promotion.
Besides their own pecuniary interests, tenured professors who are more obliging would be prudent to consider what consequences their critical comments might bring upon their academic departments, colleagues, research funding, and other benefits bestowed by administration. Very nice tenured, full professors are extremely cautious to avoid critical intramural utterance because they believe that administration will punish their criticism by denying tenure to their colleagues, or by denying their departments a much-needed tenure-track employment "line," or by cutting their budgets outright.
This leads me to a second perplexity, for another day: Perhaps academic freedom is not supposed to protect intramural utterance? Or is only meant to protect utterance within an academic discipline?
small minds, like small people, are cheaper to feed
and easier to fit into overhead compartments in airplanes
Friday, April 25, 2014
Thursday, April 24, 2014
Hegel's super-skepticism
At the end of the preliminary, critical section of the Encyclopedia Logic, Hegel notes that what has just transpired -- to wit, a thoroughgoing criticism of the history of philosophy of logic and ontology in 100 pages -- could have been achieved through skepticism about all presuppositions. In other words, instead of the detailed work Hegel has done, he could have begun by saying something like "hey, kids, you know what? Everything you thought you knew about logic is wrong. Now let's start over."
Why not just make that quick move? There's a pretty strong history of throwing out presuppositions and re-starting ontology. It's a move that allegedly permits foundational certainty, that means our knowing will be complete, real, 100% knowing, not only organic but also pesticide free and shade-grown, etc. Presuppositions, you see, are the genetically modified organisms of ontology. You're never entirely certain what they're made of or that they're going to work out the way you planned, and by the time you realize it, they've already irreparably mutated and cross-bred with everything you're growing. The skeptical move in ontology is the insistence that we start with virgin soil, virgin seeds, the pure sun, and water from untainted mountain sources. Start over again, you see, after razing what had been there before.
Hegel says doing so would be "sad," but more to the point, redundant, because the approach he's going to take to systematically construct ontology will do that work along the way. There's a constant negation of half-thought half-logic, abstraction and incompletion in Hegel's system. It picks up every single philosophical idea and perspective, both historically and systematically, and subjects each one of them to this negation. I can try to explain this basic move in Hegel's thought with the example of "immediate knowing."
Hegel says that one position on knowing is that we immediately know: this knowing cannot be justified in terms of what else we know, or in terms of our evidence, or anything. It is exactly like faith. This form of knowing isn't unfamiliar. Take this: "I know that this dog is speaking to me with the voice of god." Now, a claim like that cannot be given evidence. It cannot be justified in terms of other things the speaker knows (you can't say, "... and I know this because..."). This idea can only be asserted as true, and this assertion can only rest on itself. Immediate knowing, as a position about knowing, says that knowing cannot be tested or proven.
The skeptical move here would be to say that we shouldn't believe anything merely asserted, because it presupposes that the speaker isn't crazy, that a dog could possibly speak, that there is a god who could and would speak through a dog, etc. etc., and thus debunk the claim to know.
Hegel's claim is this: not only can't there be evidence either for or against a claim to immediate knowledge, but the claim to immediate knowledge can't be immediate. If "I know this dog speaks to me with the voice of god" can only be asserted as immediate knowing without any justification, that assertion, to have the content that it has, to have the meaning that it has, cannot be asserted immediately. "I know that this dog is speaking to me with the voice of god" requires that the proposition itself, to have any meaning at all, say something whose truth or falsity even the speaker must be able to evaluate -- or else it is not a claim to know at all. In other words, it can't be immediate, because it stands in relation to something else that would be able to tell us whether the sentence is well-formed, says something predicable, etc.
What I think this means, about Hegel's view of philosophical positions about knowing, is that every positive stance about knowing that commits the error of being one-sided is not merely false (which they are, because they are one-sided), but that none of them can be meant as they are meant. Every philosophical position-taking is hypocritical.
Every philosophical position-taking is hypocritical.
"Except Hegel's?" you're asking. Or your dog is asking.
Yes, except Hegel's... insofar as Hegel doesn't take a position. "The truth is the whole," if played out consistently, means that he can't take a position (or, technically, that if and when he does, he then undermines it).
So, skepticism isn't skeptical enough, because it's only skeptical that positions are true, or that any position could be true. Hegel's skepticism is that the position isn't what it is, and the position-taker can't take the position.
Far frickin out.
Monday, April 21, 2014
academic freedom, an introduction
I suppose most people who teach in one of America's Colleges and Universities™ think about academic freedom once in a while. I've been thinking about it lately in relation to the stuff I've done on faculty ethical responsibilities and what they could mean for faculty who work in precarious employment situations. At times, I have asserted that academic freedom does not exist for a lot of us, but that something similar applies for some of us, because of institutional neglect and ignorance of our roles and even existence. I call this similar thing academic license, to distinguish it from an ethically and politically bounded concept like academic freedom. Academic license would be the condition of one's work, opinions, research conclusions, and public statements not mattering enough to be subject to surveillance or limitation. It would be, undoubtedly, totally precarious. Under academic license, what I do and say would not matter at all up until the very instant that, for whatever reason, or for no reason, it leads to my dismissal. Since this is the condition of precarious academic employment in general, the idea of academic license merely provides a way to emphasize that, institutionally, the content of what precarious faculty do never matters.
I'm starting some deeper research on academic freedom. My early feeling is that most of what's discussed as academic freedom is missing a major point. A great deal of the discussion of academic freedom concerns political ideology, faction, public statements by professors met by official responses, and efforts by what we call neoconservatives to target academics and academic programs that they find offensive.
Here's the thing: when I read about Horowitz and Campus Watch and all those people trying to stop academics from criticizing US imperialism or the symbolic violence of compulsory heteronormativity, I think about my own ideas about such issues. They make up the idea of campus radicals in order to rile their mobs to attack socially critical academics. But I'm at least as radical as most of their prominent targets. Why don't they target me?
(I suppose this reveals that I'm a little envious of the Certified Academic Big Shots who are famous enough to matter to crazy people. Most of them make a lot more money than me.)
They don't target me because I don't exist. They don't target me because my stupid university barely exists. (As I've said before, I love my stupid university.) It's not the ideas that matter to them, it's the publicity, obviously, because they operate the same way terrorist groups do. The vast number of America's Colleges and Universities™ are like my stupid university, in that we're like the water supply. If they wanted to kill the ideas, they'd attack the water supply. But they want to scare, so they attack the big buildings, which here is metaphorical for Certified Academic Big Shots.
Much much more on this to follow, I expect. For now, here's another idea about my own condition of academic license.
I am not starting this as a "research project." I have no "research projects," because my research does not exist: it has no meaning at my stupid university, and I have no place of prominence in my academic field, largely because of my non-ranking employment status. I have no measure for tenure or promotion to meet, because I am ineligible for either. Publishing an article or book on this research is not a goal. I don't have a goal, other than to scramble my ideas of academic freedom a bit, think strange thoughts, and write strange sentences. That's not a "research project," because, as people who know me can testify, that's pretty much just my way of life.
I'm working on academic freedom basically for the same reason I started reading Hegel again (heavens help me), which is the same reason I start anything at all: to flirt.
Wednesday, March 05, 2014
"an observer's attitude"
Susan Wendell, in her article "Feminism, Disability, and the Transcendence of the Body," discusses her strategies for living with chronic pain. "Living with" is already saying something about her experiences and strategies that may not fit. Somewhat contrary to what I wrote about embodiment and non-ownership, she says she adopts a stance of treating her pain as "a physical phenomenon to be endured until it is over and not taken seriously," which suggests a form of embodiment that induces a relation, a regard, and thus a separation of the living, conscious ego from one's body.
Wendell says her mood is improved when she can say to herself, "My body is painful (or nauseated, exhausted, etc.), but I'm happy." Her illness and pain lead to depression, for which she has a similar strategy. She says she enhances the quality of her life when she can say to herself, "My brain is badly affected right now, so I'm depressed, but I'm fine and my life is going well." Leaving aside the need to develop a fuller account of depression (not her task in the article), this suggests a state of mind and a form of experience in which one's own mood is separable from oneself, or at least from what she continues to call her "life." ("Life" may or may not mean "lived experience" in a phenomenological way.)
In sum, she says, most surprisingly, "I am learning not to identify myself with my body, and this helps me to live a good life with a debilitating chronic illness." This is surprising given the trajectory toward holistic embodiment models of consciousness and life in "continental" philosophy (which would appear to be Wendell's intellectual home turf).
This seems almost like a return to dualism, of the kind that allegedly dogged Husserl's first attempts toward transcendental phenomenological philosophy. That continental philosophers keep returning to this theme suggests to me that there is a lot yet unthought about the basic move of transcendental egoism, and perhaps also still about Descartes' dualism. (I always wear my Hegel glasses when I think about this stuff: all dichotomies are false, and the truth is the whole.)
Wendell's strategies also complicate further the notion of one's "own" body or consciousness. I am totally unsure what to make of the way she displaces depression. This could be for personal reasons, namely that I experience depression as existential mood, and find it difficult to displace, and especially to say to myself, "I am depressed, but my life is good." To me, the phrase that follows naturally from "I am depressed" is "and therefore my objectively good life is crappy."
Labels: but we need the eggs, pain, phenomenology, philosophy
Tuesday, March 04, 2014
ownership
I suppose most people have had the experience of a word losing meaning after incessant repetition. Sometimes philosophy feels like deliberately inducing this experience.
One concept I struggle to understand is ownership, especially in relation to two related philosophical discussions: ownership of our bodies, and ownership of consciousness. (This has come up because I've just read an article with my Bioethics class about end-of-life decision making that raises the question whether we own our bodies.)
In everyday life, things do appear to me as mine. What I experience as most mine is what I pick up most often, what I touch, and what figures into my doing and dealing with the world. The nearer and more constant this touch, the more my own these things seem. Things are more or less mine. Almost nothing is more mine than the computers and keyboards I touch daily. Oddly, the guitar I touch daily is less mine. This is because it resists in ways the computers don't. The more mine something is, the more accessible it is to my touch, and the more I take it up into an overall movement, without resistance. The same thing can be more or less mine over a brief time span. My bicycle, most mine as I crank at high speed and blow through stop signs, can instantly be less mine when the brakes fail to respond or the gear slips.
I experience ownership of these things, through their intimacy, but also through their difference from me. As familiar as it is, the keyboard still is not my fingertips, but belongs to my fingertips. As fluidly as playing the guitar sometimes is, the guitar is always present in relation to my fingers and ears and eyes, etc. (Unlike Merleau-Ponty's famous blind-man's stick, these things I own are not extensions of my body, not appropriated into embodiment.)
So, owning my body strikes me as strangely distancing. Even when I touch my body, I don't touch my body the same way I touch things, and it is not, for me, accessible, near, nor even intimate. There is a divisibility of time and space in the relation of ownership that is not present in my embodiment. This may be a badly strained analogy, but I'll go with it anyway: if ownership is like time, embodiment is like eternality. (And for now I'll sidestep the question of embodiment sub specie aeternitatis.)
Even when my body is objectified and obtrusive, in pain or disability, I say "mine" about my body metaphorically or by extension from the way I say the guitar is mine. I say "my feet hurt," but my feet are not in relation to me the way my bicycle is. I don't feel that I approach the world through my feet or my hands, or walk or touch with them. They are my walking or touching, and in conditions of pain or disability, they are the pain-and-walking or the unable-and-touching.
Still stranger to me is the notion of consciousness as ownership. In Husserl's account of the phenomenological reduction, the Ego appears, and with it, the Ego's "own" experience. When I first read this, I was stumped by it, and I still am. Husserl seems to need the Ego and its experience to coexist in this little copula, "own," in order to find a way toward a transcendental ego. Many phenomenological philosophers would be gravely concerned by the notion that the transcendental ego just is experience, and I'm not sure I would advance that proposition (at least, in public), but that would be the parallel construction to the above notion of being the body.
Monday, February 24, 2014
the mission of the university
I've written some things about university education that could seem fairly cold-hearted, critical, or even cynical. I have wanted to write a paper to submit to an upcoming conference that would take the form of a prospectus for potential shareholders in a university based on the principles of advertising and information analyzed by Jean Baudrillard. I'm out of time to write it in a way consistent with my long-term health and well-being.
I'm grading papers and attending meetings about curriculum, instead. What I learned from one recent meeting is that, no matter how cynical the tone in my satires, I could never hope to match the cynicism of some actual university administrators. Quoting liberally from their universities' mission statements, some actual university administrators manage to bankrupt all meaning from any concept pertaining to the work universities do, while speaking of the pursuit of various metrics of this same work as the key value universities ought to have. (I should note in passing that interpreting some of what actual university administrators say about university education as cynical ought to strain us, because parsimony demands the simpler explanation that some actual university administrators are unable to comprehend what it is universities do. Calling it cynical suggests that these administrators are people who know that they are paid to say that they care about education.)
Gentle reader, you may be relieved to find that this post is not at all cynical.
Today, after another meeting about curriculum, and after a woman in a red Cadillac tried three times to run into me and my bike at the same intersection on the way home, I was thinking about how something like the university's mission is reflected in my actual, you know, work.
I graded four papers from a class of 30 just before the meeting. One of them was good, followed the prompt, and generally explained the ethical problem and the two articles I asked the class to write about. It was a B. One of them was fair, said what the two articles were about, but didn't really address the prompt or the ethical problem. It was a C. Two of them were basically incomprehensible because of poor English grammar, mechanics, syntax, word choice, and poor comprehension of course material, and failure to follow instructions. The proper score for each of these two papers would be F-. The students in the class are juniors or seniors, meaning they have already supposedly successfully completed two years of college work.
This tells me something about our university's mission. We have students who are functionally illiterate in at least the English language, and we have students who are capable of what I consider college-level work. In most of my classes, the ratio is one student who cannot do college work for every three who can. Our university's mission is to serve these students, all of them, because all of them meet admission standards at this public four-year comprehensive university, being among the top third of their graduating classes or having met admissions requirements for community college transfers.
We most often speak and think about the university's mission in terms of imparting knowledge and preparing students for careers and for life, but with a narrow, fixated focus on particular outcomes -- graduation being the most important, and the most commonly cited. It's a discourse obsessed with winning and losing -- with the university winning and losing -- and each student is one more ball game in the never-ending season.
Now, that really is cynical, keeping score by counting students who graduate and "succeed." When I grade papers with a mindset like that, I get more frustrated and angry with every paper that's hopelessly off-topic, ungrammatical, and incoherent, because every paper like that is another loss in my record.
What I think I want to know about the university's mission, and about my students, is what good we can do for these people who come here and take our classes. Win or lose.
Thursday, February 20, 2014
of termites, reputation, and character
We have drywood termites.
We called in the pest company that wrote the certification prior to our buying this house. The inspector came back out, identified the termites, said he was sorry to bear bad news, and that there had been no visible evidence of them during his earlier inspection, so we'd have to pay to fumigate. He offered a deal to us to fumigate at cost.
I said we didn't feel like this was our problem, since we relied on a certification his company wrote. We didn't buy termites. He said that in his opinion, we wouldn't be able to demonstrate in court that there had been evidence that was ignored, and anyway, that it would have made no sense for him to fail to note the termites, since he would make money on reporting them.
We contacted our realtor, who made some inquiries. The next day she called to tell us the seller and the pest inspector would cover the cost of fumigation. She opined that they wanted to protect their very good reputations. Indeed, when my Loveliest reported the termites to some friends, they immediately asked who the inspection company was, and were impressed to hear they had agreed to cover the cost.
The inspector and seller will have thus preserved their reputations. We won't have to pay for fumigation, but we will have to move out for two days, with our cats and turtle, and move all our food out of the house.
I am a suspicious person, so I did not trust anything but the offer of free fumigation (and only really barely trust that, in fact). Knowing the history of philosophy also makes me inclined to focus on the difference between reputation and character. Reading Plato will do that to ya. And I have just been reading the Apology with my intro class.
Around here, in my experience, businesspeople's (well, really, businessmen's) reputations are built on their stated commitment to Christian values. In my experience, too, this is pretty cheap talk -- never mind that a reputation for being a Christian property investor or Christian pest inspector makes as much sense as a reputation for being a Buddhist journalist or Shinto mechanic.
Have either the seller or the inspector demonstrated anything about their characters? This is a basic problem in the way Plato wrote about this issue. Someone concerned entirely about reputation, who does not give a damn in his or her soul of souls, could still be powerfully motivated to do what looks objectively to be the right thing, for reasons having nothing at all to do with ethics.
Don't get me wrong: I'll take the fumigation. I'll even report on service review websites that they provided it. I don't think I'll say anything about their characters.
Friday, February 14, 2014
philosophical habits of mind
A grad school professor of ours used to compare himself to an extraordinarily widely published friend of his, by way of Isaiah Berlin's distinction of intellectual foxes and hedgehogs. "Joe's a fox," he would tell us, "and I'm a hedgehog." The fox had a quick wit, and always seemed to grasp intuitively and immediately the scope and significance of any philosophical discussion. He responded brilliantly to questions. The hedgehog was plodding and specialist in a small area of philosophy to which he devoted years of study, ultimately to formulate one or two nearly dogmatic assertions.
I always thought our professor implied that the more properly philosophical approach was his -- the hedgehog's. What he told me about philosophical study over the years perpetually returned to the theme of focused study on one area, or at most three great philosophers. About these, the hedgehog admonished us to read practically everything published. He was a model of constancy and determination.
But I admired the fox and had a natural affinity for him. (The fox had charm that the hedgehog lacked, and was also nicer.) He seemed to be aware of every trend in academic philosophy, as well as being in the vanguard of a few. He entered any debate with goodwill and heart, and apparently without a shibboleth he felt the need to protect. I have a vague memory of him at an academic conference, mid-debate with an adamant, opposed interlocutor, suddenly shrugging and saying, "oh, yes, you're right, and I'm completely wrong about that."
In fact, I thought that the fox was more truly philosophical. The hedgehog was a scholar, practically a monk. He seemed not only to think more slowly, less broadly, but less freely. This could also make the hedgehog appear less intelligent, certainly less bright.
I know, therefore, that my bias regarding the necessity of philosophical intelligence is that it model the fox's quickness and brightness, rather than the hedgehog's diligence and tenacity.
Monday, January 27, 2014
what are the requirements of being a philosopher?
[No comment on my lengthy sabbatical from writing in this space.]
About a month ago, I started to ask myself whether someone has to be "smart" to be a philosopher. The canon of the history of western philosophy is peopled entirely by smart people (okay, except for Kant). But a philosopher is not just a smart person, obviously, and the kinds of smartness philosophers exhibit seem like they have a particularity to them that you don't necessarily find among other people, smart or not.
I know lots of really smart people, lots of people with doctoral degrees who do scientific research or academic scholarship, and teach at universities. The way philosophers are smart seems different to me than the way other people are smart. Others notice this too, or seem to, whenever they raise eyebrows at the kinds of questions philosophers raise. How much of this is the smartness, and how much of it is the particular forms of reflection philosophers are prone to?
Here's a first hypothesis. There are pretty obvious cultural and ethnic attributes exhibited by philosophers trained in the western canonical tradition, and those both favor and contribute to the development of a certain kind of smartness. So, the relation between smartness and philosophy is at least partly culture-bound, and not necessarily essential to philosophy as such.
If we strip away the culture-bound aspects, would there still be a smartness pertinent to philosophy as such?
Monday, September 02, 2013
morality, justice, and suffering
I'm reading Martha Nussbaum's book Frontiers of Justice, which is pretty good. She criticizes the worthy and prominent theory of justice of John Rawls, which draws from a long tradition in philosophy of looking at justice as if societies were formed through contractual agreements. The basic idea is that we can understand justice by imagining that societies are supposed to be mutually beneficial to all who would choose to join them. Rawls' theory of justice is probably the most robust and interesting version of the social contract, as Nussbaum argues, because it includes a moral concept: we would have to consider these contractual agreements under a "veil of ignorance" preventing us from knowing how to rig the contract to our own personal advantage. So, the terms of the deal would have to be such that anyone could be benefitted, not just oneself. Egoism is not possible in this scheme.
Cool, but not cool enough, Nussbaum says, because it has a limited view of human life, and a limited view of how the social contract would affect those who don't get to negotiate its terms because they aren't "normal" in their capacities for reason. Nussbaum goes a different way, saying that the contractarian idea has to be thought of in terms of human capabilities that are basic to human dignity. In other words, instead of self-interested negotiation, we should think about social justice by asking whether a society provides for each and every member the means and opportunity to live decent, dignified human lives. She lists 10 capabilities that are essential to dignified lives, and gives very general definitions of each. (I'm not going to go into these. My favorite is the capability to play.)
Very cool, but I'm not convinced by one thing Nussbaum does. She makes the case well for using capabilities instead of rights as a way to think about justice. She also argues against using suffering as a way to think about justice. She does so, in part, because capabilities are more fully representative of human dignity, but I think also because it's more positive. She also says that suffering is too minimal a standard, and reduces suffering to sentience, meaning something like the capacity to be aware of harm, injury and harmful, injurious conditions.
Through roundabout associations as I was reading this morning (arguments about the moral wrongness of lying, leading to considering how odd it is that lying is rejected not only tout court but tout de suite by principlist moral philosophy, considering it's such a fundamental kind of behavior, leading to considering a statement made by an erstwhile pal of mine that he would much rather be lied to compassionately than told the truth righteously), I started to consider whether suffering could be the basis for a theory of morality or justice. I don't think suffering is taken very seriously in Western philosophy, neither in general, nor as a basis for understanding morality and justice. But maybe it should be.
First of all, suffering is universal, and I think it could be argued that it is more universal than rights or capabilities. A suffering-based theory would not have to justify why "human dignity" is the right standard, nor define dignity, though it would have to articulate and justify the standard of suffering itself (i.e., how much is acceptable, maybe also from what, etc.).
One reason suffering isn't taken seriously, ironically enough, is that it is universal: our intuition is that animals too suffer, and a suffering approach, some might say, begs the question whether animal suffering ought to be important, or whether human life, morality, and justice ought to be weighed in terms of something that non-humans are also subject to. Nussbaum wouldn't want to accept these claims, really, since she's also interested in understanding our relation to non-human animals in terms of justice. But it is clear that Nussbaum's dismissal of suffering is too quick. I say she reduces it to sentience, to mere sentience, and that this ignores the dimension and texture of suffering. Though universal, suffering happens to us in every way we connect to the world, and in the same depth. Suffering is different for different beings, varying in one way because we have different learned capacities for connecting to the world: some of us can suffer aesthetically in ways others of us don't, or at least not as much.
I'm pretty sure a phenomenology of embodiment would provide some key insights for an account of morality or justice on the basis of suffering -- in fact, I know a few people have worked on this. There's also some obvious, if superficial, analogies to Buddhist ideas. In any case, it's a thought I've had in the back of my mind for a long time, and reading Nussbaum has helped me see more clearly why it's appealing to me.
Tuesday, August 27, 2013
faculty moral responsibility for education fraud
Between 1999 and 2011, student loan debt increased 511%. College graduate unemployment is a little under 9%. The largest single employment sector in the US economy is retail sales. The largest sector of employment growth in the last two years is in temporary, low-skilled work.
The knowledge-based and expertise-based legitimations of college education are long dead. College degrees as credentials for entry into information-processing jobs are nearly dead. There is some reason to think college education provides relevant training that can be useful in various careers -- largely indirectly, through the development of "hidden curriculum" skills and attributes like perseverance, rule-following, mastering encrypted forms of communication like academic prose, etc. But these careers have lost a lot of their prestige and power, and are losing stability and security rapidly.
Under these conditions, getting a college education has to appear much less like a shrewd investment, and more like an expensive gamble. The basic economic function of colleges and universities -- non-profit and "public" as well as private and for-profit -- is to transfer wealth from poor laboring classes to rich capitalists who leech from the system at every pore. (Contemporary capitalism is called by several colorful names: disaster capitalism, predatory capitalism, casino capitalism. I think I like parasite capitalism.)
At some point, I imagine, the economic behavior of people will change to reflect this, and people will stop going to college. I fantasize how people might hold higher education to account for this economic arrangement, and for what could be called fraud.
What is my moral responsibility for this, as a college faculty member, given that I benefit (though modestly, especially compared to parasite capitalists)? Should I discourage people from going to college, despite the potential ramifications to my gainful employment? Should I try to show this perspective to current students, despite the potential ramifications to the teacher-student relationship? Can I "teach" a class, without excessive irony, after I have exposed this arrangement?
Let's see.
Wednesday, August 21, 2013
what legitimates shared governance?
In most colleges and universities there is a structure called shared governance. Through this structure, the institution sets policy and makes certain decisions about academic programs, personnel, and other closely related matters. Beyond that very general overview, really nothing can be said about shared governance that applies to all colleges and universities. Shared governance apparatus and the capacity of those apparatus to foster genuinely shared genuine governance range widely.
From the perspective of faculty, shared governance ought to serve the faculty in shaping and recommending policy to the administration. Many statements about shared governance emphasize this by saying that the administration should follow policy recommendations duly approved by academic senates, and give compelling reasons when they do not.
Why should faculty have this authority? One answer, with a long tradition, is that faculty are experts in their fields, and therefore have the legitimate claim over directing the academic policies of their institutions. This is a claim about professional knowledge, judgment, and status, and is a common feature of every profession's assertion of self-regulatory authority. Since only medical doctors can make knowledgeable judgments of the work of medical doctors, medical doctors should have that authority; since only chemists can determine whether chemists are doing their work properly, chemists should regulate their own work.
Over the last 40 years or so, this authority has eroded, for every profession, as corporatization, privatization, and bureaucratization have taken over in formerly public-serving fields. Shared governance is a slow process; predatory capitalism can't abide this.*
The question is, what would make it seem reasonable to deny that doctors should have the authority and responsibility to determine what doctors should do? Why on earth would the regulation of doctors fall to people with financial spreadsheets? Similarly, why would the determination of academic policy fall to such people, many of whom are absolutely unable to talk about academic policy in any terms other than cash?
I am certain this is partly the result of the delegitimation of claims to expert knowledge. The authority of doctors, chemists, philosophers, or anyone else has become suspect. Expertise is now the function of computer programs, and the reduction of all values to money is an unquestionable ideology.
Under those real conditions, what could legitimate shared governance? My answer comes from the underclass of the academic profession, the permanently temporary, "contingent," or, as I prefer, the tenuous-track faculty. This super-majority of faculty (more than 75% of all college and university faculty) have been excluded from shared governance all along, and are only now getting some voice.§ The tenuous-track faculty's claim to a part of shared governance does not primarily rely on expert knowledge, in my opinion. Our expertise is doubted by many faculty, and almost all administrators, so such a claim would fail. Instead, we rely on a simpler, earthier, and more fundamental set of claims.
1. Labor. Tenuous-track faculty do the majority -- the vast majority -- of teaching work; therefore, tenuous-track faculty deserve a share in governance. The principle of justice here is a kind of proportionality: those who do most work have most at stake.
2. Civil and human rights. Tenuous-track faculty are people, actual real human beings, and as people deserve a share in governance. This is a liberal-democratic claim, that individual human beings have the right to self-determination and participation in social institutions.
3. Expertise. And by the way, yes, we are experts, thank you. We may lack full credentials in some cases, and we lack privilege and prestige, but we still have expert knowledge. There is a subtext to this: if shared governance is denied to those who do the work that recognized experts do, then the institutional power of recognized experts looks much more like mere privilege.
--
* Allegedly because of "competition," but of course the real reason all institutional change has to be rapid and dramatic is to perpetuate crisis, stun people, and create opportunities for seizing still more power.
§ I don't think it's an accident that this comes when shared governance is losing clout.
Wednesday, August 14, 2013
the moral dilemma of collecting unemployment
I received my new three-year appointment, and signed and brought it to campus today. (I had requested a bump up to Range C, since I've been in the same salary range for 14 years. As soon as I received an evaluation letter saying they would try to make this happen, I knew it wouldn't. But that's another story.)
In between contracts, for the first time, I collected unemployment, to which I was entitled under California law. I expected to feel funny about that, because I can make ends meet, and there are plenty of people who can't. On the other hand, there's a reason to collect beyond my own condition. Lecturers in the CSU apply for unemployment partly as a political move to raise the cost to the administration of keeping lecturers in precarious employment status -- since the law stipulates we're eligible because we're in temporary employment that ends without any reasonable assurance of future work. My collecting unemployment supposedly has some effect on incrementally pushing for better working conditions for all lecturers.
But that wasn't my moral dilemma at all, as it turned out. It was Optima.
Optima is one of Hermann Zapf's two masterpieces -- the other being Palatino -- and, if not my favorite font, certainly one of my top five. It's also the font of choice for the California unemployment agency, printed in that displeasing blue government bureaucracies always manage to put on everything, and that somehow always looks faded. All the pamphlets explaining how to be unemployed, how to try to stop being unemployed, and what to do to avoid losing unemployment benefits were covered in it. It's on their envelopes. It's on their logos. In that context, this perfectly weighted, ambiguously quasi-serifed work of art looks like -- well, like something sent to you from the unemployment office.
There's not a lot I can do. I had to change my fonts on this blog. I'm going to have to remove it from the course syllabi for which it is the basis of the style sheet.
So, now what? I already use Palatino for Bioethics. Professional Ethics uses Futura for headings and Goudy Old Style for text -- a devilish combination that works despite itself, and in which I take justifiable pride. The course is already laboring under the unwieldy title "Human Interests and the Power of Information." What am I supposed to do -- Avant Garde Gothic headings? That way lies madness.
Monday, August 05, 2013
philosophical problems
I suppose my post yesterday could have suggested I agree with whoever it was who said that life would never have posed any philosophical problems -- meaning that the tradition of Western philosophy is a history of self-invented puzzles and linguistic foibles. (I think it was G.E. Moore.) I don't think I mean that. It depends.
(1) If we stick to the strict language of philosophical problems, and consider philosophy to be a search for solutions, then I do mean that. Life's problems are not philosophical. I don't think philosophy is a solution-machine, either.
(2) If we broaden the terms, and ask whether life poses philosophical questions, then I think it does. And I think this is where philosophy is at home, answering, and wondering, in response to questions.
I'll illustrate this with a brief look at a motivating moment in Plato's Republic. The passage I have in mind is when Socrates responds to Thrasymachus' claim that justice is the advantage of the stronger. Thrasymachus is not clear about this, and his position ends up being incoherent, but what it amounts to is the view that the best life is one spent using power and wealth to acquire more power and wealth, and to hell with everybody else. Socrates demonstrates many problems with this position (for instance, what happens when another tyrant comes along, or when the people you depend on to produce the wealth you steal become totally corrupt or die out), but Thrasymachus doesn't care. In this, Thrasymachus is consistent. The ethical, political, and practical faults in his position don't matter to him. One imagines that, when these arise, he'll be happy to smash opponents, buy new slaves, build an arsenal of drones, secure his southern border, make war for oil, etc.
Socrates does not effectively refute him, in my view, because the argument doesn't continue. Glaucon and Adeimantus take it up, clarify it and try to make it coherent,1 and demand that Socrates show how it could possibly be better to have the reputation for total vice and be punished and persecuted for it despite being good, than to have the reputation for goodness and be rewarded and praised for it despite being wicked.2
Two ways to look at this. One is as an allegedly philosophical problem: the problem of how to get people to be good, or of how to be good, or of how to have a good life. In my view, Socrates necessarily fails to solve this problem, because it's not a problem that can be solved by philosophy. How do I know? Big hint: Thrasymachus has left the building! Socrates talks up Glaucon and Adeimantus, while the problem, Thrasymachus, is off gleefully doing high-finance deals -- that is, beating people up and stealing their money.
The other way to look at it is as a philosophical question: the question of the meaning of virtue and vice, the meaning of a good life, and of why people like Thrasymachus seem so happy when the rest of us poor slobs aren't. Now we're talking -- literally, since that's exactly what they do. And they have a good time, and they don't hurt anybody while doing it.
Life poses philosophical questions (or perhaps this is better phrased as opportunities for philosophical questioning) all over the place, all the time. The question of the good can pop up with the toast out of the toaster. Which is great for us, because it offers consolation when people like Thrasymachus beat us up and steal our money.
--
1. Mistake #1. Thrasymachus' position is most accurately put incoherently, because he doesn't give a shit about listening to reason. Having power means you don't have to listen to reason. So, they've distorted his position, and the entire business thereafter is based on the mistaken notion that his position requires a rational defense.
2. Mistake #2. Another distortion of Thrasymachus' position. Having sufficient power means that your reputation doesn't matter. In fact, having a reputation for violence, wickedness, irascibility, and rapaciousness is good for people who have those characteristics because it makes people afraid of them and more compliant. DUH!
Sunday, August 04, 2013
philosophy is unnecessary
We don't need philosophy.*
The American philosopher John Dewey proposed that genuine inquiry could only arise as a result of a real, practical problem, and could only last until some solution to the problem arose. Most of the history of Western philosophy has pursued two kinds of problems: problems about knowing, and problems about doing. Let's call those epistemology and ethics.
In everyday life, in our dealings with the world, in our conduct toward one another, no problems arise that would call for the study of epistemology or ethics. That's not to say we have no problems when it comes to knowing or doing. In fact, we spend a lot of time and resources trying to solve them or dealing with the fallout when, instead of solving them, we act without thinking and create big messes. But the problems are not philosophical. They don't call for a philosophical study of epistemology or ethics.
Here's an example. I recently wrote email to a listserv about the response I got to a paper on faculty ethics and tenuous employment status. The paper was philosophical. It was asking about what ethics could mean, given that the kinds of ethical codes traditionally written and applied to professional work just don't fit tenuous faculty work. I proposed a way to consider the work of ethics, drawing from Michel Foucault, as a way faculty could consider who they themselves are, what kinds of moral subjects they might be or become, and on that basis, make a deliberate choice about the moral regime or code they would follow.
Someone responded by taking exactly the wrong bait (granted, this person never actually read my paper, so she was only taking the wrong bait of the email description of the response I got, and was writing from a position of ignorance). She said, more or less, that the tenure-track faculty should answer for their crimes, and that this, obviously, was what an "ethics" discussion of tenuous faculty would call for.
This illustrates very well the kind of problem people want ethics to solve, and why philosophy is unnecessary. She wanted to blame people, at the very least. She wanted philosophy -- or something -- to provide her a tool or an excuse to blame the people she wanted to blame. But philosophy doesn't do that. It's useless for the kind of moral judging, shaming, persecuting, and executing that people want ethics for. (What she really needed was sophistry or rhetoric.)
Another, much briefer illustration. Every so often, sciences get into tangles about their own basic systems of belief. Laws that had predicted and explained natural phenomena lo these many years sometimes go kaflooey, and then the sciences freak out, because they need a basic system of belief in order to do science work: running experiments, collecting grants, inventing new ways humans can fuck things up, etc. Where do the sciences go when their basic systems of belief go kaflooey? Not to philosophy, and for good reason. Philosophy would start theorizing about concepts like certainty and truth, their connection to perception, the connection between all that and what we mean when we say the world or the universe. That's not the problem of knowledge the sciences undergo.
Where does this leave philosophy? What is it? It looks like an extravagant, excessive, willful diversion from problems.
--
* This might be my answer to the question I posted in March: How can anyone take philosophy seriously?
Saturday, August 03, 2013
are blogs dead?
Maybe.
I haven't been using mine as intensively as in summers past for thinking my little thoughts. This summer has been different from summers recently past. I've been doing some stuff off-label. I ended up not having a lot to say about Bataille or Bachelard, only a little to say about Husserl, nothing at all about Sloterdijk, and until now nothing to say about Levinas. I don't think I've been as driven, or as narrowly focused, as in recent summers.
Partly, I think I'm still running out the implications of the intensive work I did over two summers ago. That has churned up things to track down and write about embodiment, passivity, erotic experience, normality and abnormality, and now subjectivity and consciousness (that's why I'm reading Levinas). I'm also still writing about faculty subjectivity and ethics in relation to tenuous-track employment status. In short, doing the kind of work that puts things in order, ties up loose ends, and so on, has taken up more time and space, and so I'm reading more broadly and less intensively.
I'll probably write something in this space about Levinas soon. He's starting to bug me.
Friday, July 26, 2013
crazy people and philosophers (and other academics)
We were at a philosophy conference this past week. It was good.
A couple of papers dealt directly or indirectly with mental illness, which led to a discussion of mental illness among faculty. The group there assented generally to the idea that academics "are all OCD" and many are more significantly sick. This was amusing to all.
Meanwhile, I was reading Jung during respites from the conference itself, and came across this passage:
So the difference between [the sick person] and Schopenhauer is that, in him, the vision remained at the stage of a mere spontaneous growth, while Schopenhauer abstracted it and expressed it in language of universal validity... A man is a philosopher of genius only when he succeeds in transmuting the primitive and merely natural vision into an abstract idea belonging to the common stock of consciousness. This achievement, and this alone, constitutes his personal value, for which he may take credit without necessarily succumbing to inflation. But the sick man's vision is an impersonal value, a natural growth against which he is powerless to defend himself, by which he is swallowed up and "wafted" clean out of the world... The golden apples fall from the same tree, whether they are gathered by an imbecile locksmith's apprentice or by a Schopenhauer. ("The Relations Between the Ego and the Unconscious," The Portable Jung, 90f.)
I know a lot of academics who are quick to self-diagnose. I also know a lot of academics who are the objects of bona fide psychiatric diagnoses, myself among them.
Now that I'm reading Jung's account of the extraverted personality and its unconscious, I'm seeing this behavior in a different way. There's something weirdly self-inflating about the self-diagnosis. It places one on a strange kind of pedestal, I think. It creates a status, a twisted status no doubt, but one prevalent in academia and one with related echoes.
Academics constantly speak of how busy they are, how frenetic their work schedules are, how many deadlines they are under, and how seldom they meet deadlines because they take on too much work. We chortle to one another about our poor social skills, poorer social lives, often our poor health and eating habits, chemical dependencies, and other marks of malaise.
This is a bizarre expression of arrogance and self-aggrandizement, according to a value system we adopt to be full-fledged members of the academy. Sickness, self-imposed sickness, physical, social, and psychological deformities, are virtues in this system.
And so, we recognize ourselves and one another (to the extent we do recognize one another--see social deformities, supra) as super-functioning pathological cases, in a gesture that expresses astounding antipathy for the truly and severely ill, and profound alienation from ourselves, one another, our communities, our humanity, and, yes, our work.
I may start to experiment this fall, responding to all the myriad expressions of this habitus, by saying something about my health, well-being, and free time. I suppose that means I'll be telling stories of cycling, guitar playing, and writing music.
Friday, July 12, 2013
normal, abnormal, and problems
From the standpoint of persons who regard themselves as normally sexed, their environment has a perceivedly normal sex composition. This composition is rigorously dichotomized into the ‘natural,’ i.e., moral, entities of male and female.
For such members perceived environments of sexed persons are populated with natural males, natural females, and persons who stand in moral contrast with them, i.e., incompetent, criminal, sick, and sinful.
The members of the normal population, for him the bona fide members of that population, are essentially, originally, in the first place, always have been, and always will be, once and for all, in the final analysis, either 'male’ or ‘female.’
-- Harold Garfinkel, "Passing," in The Transgender Studies Reader, pp. 59, 62, 62
This illustrates starkly why normality matters. I assume that, 53 years after Garfinkel published “Passing,” a sizable minority of the population of the US understands that the characteristics of sexed bodies range along spectra of both genotypic and phenotypic traits, as well as that sexual behavior is wide-ranging. This seems pertinent to a slight shift in attitudes toward sexual variation and what I am sorry to have to call tolerance toward abnormalities. (Easy, I suppose, for the polymorphously perverse to say.) The assumption of sex binarism remains powerfully normative.
So when Husserl analyzes the “normality” of the prevailing surrounding world, as the background horizon of all our everyday activity, it’s hard to avoid reading that word in the same sense as Garfinkel’s usage—which I think is basically also Foucault’s, and Sara Ahmed’s. Foucault’s work on power/knowledge, particularly The History of Sexuality, is usually interpreted as a critique of institutional normalization as a process of the production of regimented bodies. Ahmed, in Queer Phenomenology, develops a quasi-phenomenological critique of the phenomenology of orientation and normality. Looking back from this standpoint on Husserl’s presumably phenomenological account of normality, I see a very strange equivocation, or possibly an ambiguity.
Normality can be analyzed on three levels, to start. In my language for these, purely subjective normality is the level of my own perceptual/embodied being in the world. It would entail all that is unique to my own perspective, being six feet tall, of very acute hearing in the left and some deafness in the right ear, of very myopic but focused vision in the right and less myopic but poorly focused vision in the left eye, etc. For me, my aural, visual, etc. perception is normal as per these peculiarities. What is abnormal for me is distortion in the perceptual field, for instance when I first put on new glasses, or when there’s water in my ears from swimming. As I adjust to the new glasses, the anomalous motivates a reconstitution of the normal, that is, a new normal, which then prevails, becomes sedimented as “just how it is for me,” and disappears into the horizon.
Intersubjective normality pertains to the everyday world shared with others. For Husserl (in contrast, I think, ultimately, with Heidegger), the intersubjectively normal surrounding world involves actively as well as passively shared meanings, events, constructions, etc. Communication and community are how there comes to be a real world, an objective world for us, and it is in reference to this real world that intersubjective normality has its crucial significance. This is one level at which the idea becomes important that the real world is corrective.
Were I to exist solus ipse, my purely subjective normal perceptual life would obtain, always and everywhere—given the caveat that distortions, error, etc., serve as self-correctives, in reference only to purely subjective further intendings. But, obviously, others’ perceptual lives and their actions matter to me and are part of my own experience. Intersubjective normality is there for me because others are. Were I to perceive and act as if trees were murderous, or as if human beings should wiggle on the ground to get around rather than ambulate, it would matter for others that I did so, and it would matter for me that others did not. My abnormal perception and action would appear as abnormal for me and for others. How?
If we stop right here, we have the problem of normality and abnormality: for us, intersubjectively, the presence of abnormality produces a mini-crisis of meaning and the presumptive unity of the world. We are motivated by the real-world assumption to wonder and problematize the abnormal, and to seek some correction, as Husserl says. Now, Husserl resolves this problem, in all I’ve read on the matter, too quickly, in turning to what I consider a third level of normality. To me, the problem of normality and abnormality is most acutely present at this intersubjective level. A person directly in front of you, terrified that the tree is going to kill him, or you, matters right now for you and for that person. You are motivated to correct and to restore the presumptive unity of the real world, because it matters whether this person is right about the trees, and because it matters that this person in front of you has this belief, because this person’s conduct takes place in the same world. (Isn’t that how it is that “crazy people” are terrifying? They induce crises, that we must resolve, by some alteration of our concept of the world, or at the very least in our own conduct—avoiding them, helping them, realizing they are right, etc. For those moments, the real world, the horizonal context of all our everyday activities, is shaken, if only just a little.)
One way to resolve the problem of normality and abnormality is in reference to a standing tradition, or culture, and this is what Husserl does. I want to call this sedimented normality to refer to its being in the ground of so much of the real world as such, and also to allude to the use of the term sedimented in phenomenology as the institution of passively accrued meaning, via actively lived experience. This also helps articulate why I think Husserl jumps the gun: if the intersubjective problem of normality is active, present, here and now, his resolution by reference to sedimented normality reverts to a passively accrued "there is." Adjudicating the problem of normality through sedimented normality really just ignores the problem. Maybe Husserl is right about this, when he says about understanding the foreign, that because he is raised European, German, and as a small-city resident, the foreign person's lifeworld will only be understandable in analogy to his own. That is to say, the sedimented normality of the presumptively real European, German, small-city world obtains, because it is there.
In some ways, and to some extent, Husserl does have to be right about this. I cannot undo my being raised as I was. But it does not help explain how that way of life, and that tradition, became. The sedimentation of tradition is going on, incrementally, in those minute intersubjective dealings, it seems to me.
And, obviously, traditions are revised with each generation. Normality shifts, slowly, or there is a more significant crisis, and tradition loses its traditional status as it becomes an object of deliberation, critique, understanding, revision. There is a moment in Husserl’s analysis for this critique, and he acknowledges this even though he does nothing much with it. That’s when the equivocation or ambiguity of normality matters. In that intersubjective, problematic moment, we are confronted by the fact that normality is constituted, and re-constituted, and open-ended.
Friday, July 05, 2013
abnormality, universality, reality, world
Abnormality is a serious problem for a philosophy that grounds objectivity and truth on intersubjective reality, as does Husserl's phenomenological philosophy. (Let me boil this down. In Husserl's view, I think, there's objective empirical science, and truth, because there is a world that is universally real for all people. That means, in short, that our actions are always motivated by and directed toward that same world. In turn, there is such a world because, between us, our community, our communication, and our being human are based on our being with one another and recognizing one another as human. So, the very fundamental basis of objectivity and truth is our being interconnected and sharing this common world of experience.)
If there's abnormality, as Husserl says himself in the texts collected in Husserliana XXXIX -- Die Lebenswelt [The Lifeworld] -- there seems to be contradiction within this common, universal world. In that case, its unity, and hence its universality, would seem to fail. Now, if that fails, so too does the ultimate warrant for addressing objectivity or truth.
Husserl addresses this in terms of there being normal and abnormal experience. His example in text number 16 of Die Lebenswelt concerns the normality of color-sightedness and the abnormality of color-blindness. The color-sighted and the color-blind each deal with the world in terms of their own way of seeing, even though this seems to mean the world they share in common harbors a contradiction. Husserl's extremely dissatisfying answer, in this text, is that they acknowledge that each sees the same world, the same things, but differently. Oooooo-kay, but this isn't really resolving anything. His examples are so general that they're superficial, almost meaningless.
This matters to me as an intriguing philosophical question. But it matters more as a practical problem in the world. I'll get at this two ways, one through more academic philosophy, the other through everyday life.
I now read almost all philosophy through Jean-François Lyotard's book The Postmodern Condition (1979). In this book, Lyotard asserts that the current state of knowledge is characterized by "incredulity toward metanarratives" that serve to give warrant to the discourses that generate knowledge. In effect, his claim is that the connection between reality itself and the discourses that claim to tell us about reality is now doubtful. Physics, for instance, used to be grounded in a claim either to be able to present the whole truth about the reality of bodies in motion, or to be able to make life better for us by making nature our servant. Neither of those is a claim that physics can make for itself, because they aren't claims about bodies in motion, but claims about what the study of bodies in motion can do. So, they are not scientific knowledge claims, but narrative knowledge claims -- stories about the role of physics in the world. But those stories are no longer credible, the first because physics itself has led to the discovery of the limits of objective knowledge in physics (viz. Heisenberg), the second because physics has allowed us to build bombs that threaten to blow up the world and all the physicists with it.
Here's why, in everyday life, this matters. In the postmodern condition of incredulity toward metanarratives, we have the technological apparatus of scientific knowledge, including all the stuff we make out of it, but without the grounding of those claims on a reality principle. So, we live in a world of competing and contradictory claims about reality. The simplest example of this is the "debate" over global climate change. In this debate, there are 97% or so of people with backgrounds in science discourses, who all agree that there is global climate change, that it is a problem, and that human activity contributes to this change. Then there are 60% or so of US Republicans, who do not believe in global climate change, regardless of their backgrounds. Rich members of this latter group fund "research" institutions that generate "knowledge" that climate change is not real, or not significant, or not caused by human action, or not a problem, or caused by trees, etc. (I note in passing the lovely Democritean skepticism of this argument. It's like Metrodorus' On Nature: there is no global warming; if there is, we can't know anything about it; if we can know anything about it, it's not important; etc.)
Under the postmodern condition, with the connection between knowledge-generating discourses and reality severed, these competing, contradictory knowledge claims co-exist, but their co-existence is untenable. They cannot both be correct. (This is assuming that the climate change detractors are not cynically pursuing profit, which is certainly possible.) It matters very much who is right, and so it matters very much that we have some way of addressing this contradiction.
We don't. We vote on it, which is as absurd as voting on whether the things we perceive have color or not. Reality being intersubjectively grounded does not mean we vote on what's real. It means that there is a reality, a universal world, to which we can all refer for adjudicating our differences, and toward which each of us is directed, and in reference to which a perspective is normal or abnormal. Or else.
And so far, Husserl's response to this major problem is, yeah, we deal.
Sunday, June 23, 2013
why I am not Nietzschean / why I am Hegelian
I'm losing patience with Bataille's chapter on Nietzsche. Maybe this is because of Bataille's interpretation, but it fits with just about everything else I've ever heard or read people say about Nietzsche, so I think what's really going on here is that I just don't get Nietzsche.
What I understand about Nietzsche is mostly what I remember from reading him as a kid. Like almost all the male philosophy majors I've ever come across, reading Nietzsche propelled me into philosophy. Raised as I was in a fairly ordinary conformist, authoritarian, white US way, I found Nietzsche subversive: a vicarious expression of my own inarticulate rage, a source of quotations to use in aggressive confrontations. He seemed to provide a way out, an alternative to the oppressive regime of God-Father-State-Capital. And maybe he does.
From what I understand, though, his alternative is an affirmation of sovereign will toward life, no matter what. This is usually taken to mean embracing the urgency of the present (and eternal) moment, with no time to reason through options. There are no reasons, and there are no options, to this embrace. Reason, as Nietzsche might himself have said, paraphrasing ironically, is always too late to the scene to provide real guidance. When we imagine that reason guides us in those moments, either we fool ourselves into believing our own post hoc rationalizations, or we let others fool us into servility to their God, Law, or System.
I can't take it seriously (and I can't take Bataille taking it seriously, seriously -- of which more in a moment). Bataille suggests there is a basic and binary choice about how to live: either objectively, for the sake of something, for which we produce and accumulate and save; or subjectively, for the sake of nothing, consuming without end, in sovereign transgression.
When I get the rare chance to talk to anyone about Hegel, I tell them that the most important thing to remember about Hegel is that for him, every dichotomy is a false dichotomy. Notice how Bataille's Nietzschean gambit lines up productivity and accumulation with servility specifically to God-Father-State-Capital, as though the only end there could be would be so external and extrinsic. (By the way, you could just as well replace Capital with Communism, which Bataille does consider an objectifying and enslaving end as well.)
The thing is, I'm with Hegel, and not just on this. Nietzsche and his progeny (hah! Take that!) declare independence from the slow, inexorable, tedious workings of a System by fiat: "God is dead" or "the king is dead" or "let's have an orgy" or whatever. But Bataille's Nietzschean concept of sovereignty is set in the context of a world of Hegelian industry. Sovereignty only has meaning in that context, in opposition to a System of production and accumulation -- it depends on it, in order for there to be anything to consume and expend. It can never be more than a momentary explosion, and not a way of life (except for that one solitary exception, who would be absolutely appalling to live with or witness).
I'm with Hegel because I believe that what I do adds something to the world as a whole, whether or not I determine what it is, or can even tell. I'm with Hegel because I believe that reason, however late arriving, is the way the whole makes sense, not just to us but for itself. I'm with Hegel because I spend nearly every moment of consciousness and nearly every watt of my energy being productive (though that's a psychological condition, not a philosophical one).
Mostly, I'm with Hegel because I am a pessimist like he was, because I believe that this productive activity and effort of reasoning continue toward this end that they will never reach, because every current state of events and every current state of knowledge will fall to the negation of contingency, ground to dust under necessity, to become the ground of the next state, and the next. I have no choice but to produce, and what I produce will necessarily be annihilated.
From this angle, sovereignty looks like the happy child's playful destruction of toys.
**
A quick note on Bataille's notion of sovereignty: I see him combining Nietzsche and Hegel in a very peculiar way. Bataille's sovereignty is negative through and through, because, despite his protestations, it's clear that sovereign expenditure does work and has meaning. As he notes about the impurity endemic to all that is human, a human attempt at sovereignty would also be impure. There would be an exception to this exceptional subjectivity, a leak of objectivity and production. For instance, sacrificial expenditures by Aztecs, as he interprets them in volume one, are all in the name of and in service to gods, but the gods are also in service to something else -- the earth, sun, and moon provide for the people's needs. Despite himself, Bataille's got a system in which expenditure and production are two strokes, like systole and diastole (a metaphor he uses as well -- take that!).