Blog Moved

Tuesday, March 12, 2013

Creating PDFs for EBrary Reader

Turns out that Brown has a subscription to at least part of the EBrary archive of online books. I found this out because I was looking for a book in our library, using the online catalog, and discovered that said book was available online. Very cool!

Well, kind of cool, until I found out what this involved. By default, you basically have to read the document online. You can download the "EBrary Reader", which is a Java application, and read documents using it. But it's kind of clunky, to say the least. What I wanted was a PDF that I could then use as I wanted. How to get one?

I noticed that the EBrary Reader would allow me to print, so I thought maybe I could print the file as a PDF, since the default print dialog for Fedora lets you do that. Unfortunately, the Java application was not using the system dialog, but a Java dialog, so that didn't work.

A little googling led me to the cups-pdf package, which installs a system-wide PDF printer for the Common Unix Printing System. A quick "sudo yum install cups-pdf" was enough to give me access to that.
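If you want to check from the command line that the virtual printer actually showed up, something like this should do it. The queue is usually named "Cups-PDF", but that, and where the output lands, depend on how the package is configured; "some-document.txt" is just a stand-in:
# confirm that CUPS now lists a PDF queue
lpstat -p
# anything printed to that queue gets written out as a PDF, in whatever
# directory the Out setting in /etc/cups/cups-pdf.conf points to
lp -d Cups-PDF some-document.txt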

The next step was to convert the resulting PDF to DjVu, which tends to be much smaller than the corresponding PDF. I've done this a million times before, so I figured it would be pretty easy. Unfortunately, it was not.

The first step was to run the pdfimages command (from the poppler-utils package):
pdfimages -p file.pdf p
to extract the page images. Imagine my surprise when I got 420 images from a 20-page paper! It turned out that each page was constructed from 21 different images, stacked on top of each other. (To help with download times?)
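Incidentally, a recent enough poppler-utils can show you this structure without extracting anything, which would have saved me the surprise:
# list the images page by page instead of extracting them; the first
# column shows which page each image belongs to
pdfimages -list file.pdf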

Fortunately, I've had enough experience with ImageMagick to know this was a problem that could be solved. It took a little more googling, and a little experimentation, but eventually I found out that:
for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20; do
  montage p-0${i}*.ppm -geometry +0+0 -background none -tile 1x21 page-$i.tiff;
done
would stack all the images back on top of each other.

So now I had 20 page images, all as TIFFs, and those could then be fed to ScanTailor for processing on the way to creating a DjVu.
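In case it's useful to anyone, here is roughly what that last step looks like once ScanTailor has written its cleaned-up pages. This is just a sketch using ImageMagick and the djvulibre tools, and it assumes bitonal TIFF output sitting in a directory called out/; the names, the resolution, and the options are illustrative, not gospel:
for f in out/*.tif; do
  convert "$f" "${f%.tif}.pbm"                    # ImageMagick: TIFF to bitonal PBM
  cjb2 -dpi 300 "${f%.tif}.pbm" "${f%.tif}.djvu"  # one single-page DjVu per page
done
djvm -c book.djvu out/*.djvu                      # bundle the pages into one document
For grayscale or color pages, c44 takes the place of cjb2 (it wants PPM or PGM input).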

Friday, March 8, 2013

I Love Perl

I posted about how to write simple Perl filters before, but I have to say that I just love doing this:
perl -i.bak -pe 's/plaintext\(odocstream &(.*?), OutputParams const &(.*?)\)/plaintext(odocstringstream &$1, OutputParams const &$2, int max_length)/' *.h *.cpp
How much time did that save, compared to doing it manually?
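The one caution I'd add, for anyone tempted to imitate this, is to do a dry run before letting -i loose on a whole source tree. Something like the following (with a made-up pattern and file name) shows you exactly what would change, without modifying anything:
# run the same substitution without -i and diff against the original;
# the output is exactly what -i would have done to the file
perl -pe 's/old_signature/new_signature/' SomeFile.cpp | diff SomeFile.cpp -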

Thursday, March 7, 2013

The Grand Teaching Experiment (7)

I haven't posted for a while about the teaching experiment, because it was basically on hiatus. For the last few weeks, we have been doing technical material, and it does seem worth lecturing about that. Now, though, we are back to more philosophical material, so we are back to having discussions.
Up this week were Field's paper "Tarski's Theory of Truth" and Etchemendy's paper "Tarski on Truth and Logical Consequence". Both of them are pretty clear, though the dialectical structure of Etchemendy's paper is complicated (as I argue in my own paper on the topic).
The discussions seemed to go pretty well, perhaps better on Etchemendy than on Field, and that perhaps because I have a deeper understanding myself of that paper. As previously, the students' written responses to the papers were very good. Everyone seemed to have a solid understanding of the basic outlines of the arguments, with some students (unsurprisingly) a little ahead of others. But what I'm really coming to appreciate about this way of doing things is that, as we start class, I already have a pretty good idea what people understand and what they do not, and we can focus our attention either on filling in the gaps or else, even better, digging more deeply. And because everything is based on discussion, we dig in the direction the students find interesting, or where their questions naturally lead.
It's definitely clear, as I mentioned in a previous post, that the sorts of detailed reading notes I've been giving the students recently are important to this kind of approach. I've had more than one student remark on this.

Tuesday, February 12, 2013

The Diagonal Lemma: An Informal Exposition

A couple years ago, I got a stream of emails from someone who shall remain nameless, who was completely convinced that there was something desperately wrong with the proof of the incompleteness theorem because of some kind of circularity in the diagonal lemma. I ended up writing him a long email explaining the diagonal lemma in terms of informal syntax, much as Quine does in Mathematical Logic.
I realized shortly thereafter that many of my students are in a position that is not so different. The diagonal lemma just is very hard to understand, in large part because its proof, in most expositions, is bound up with Gödel numbering, the representability of recursive functions, and the like, which really don't have very much to do with the diagonal lemma itself. The diagonal lemma is really a fact about syntax, not about arithmetic, and when one explains it in those terms it makes a whole lot more sense.
I therefore converted my original email into a short (five page) document that I've now used in a couple different courses. It can be found on my website.

The Grand Teaching Experiment (6)

Having finally dug out of the blizzard and gotten back to work, it's time to start thinking about teaching again. (Brown was closed on Friday, and there was still a parking ban in effect in Providence on Monday.)
Last Wednesday's class was, as I'd warned, on Dummett's paper "Truth". As I mentioned in my last post, the students in the class did an amazing job with the paper. Their responses, posted to the course's Canvas site, were all very, very good. I don't wish to take credit for that. It had, in the obvious sense, nothing to do with me. But the reading guide I posted for them definitely seems to have helped. I've taught this paper many times, and I've never seen people make so much sense of it.
The discussion in class was at a correspondingly high level. We ended up spending the whole time talking about Dummett's two arguments against the explanatory sufficiency of Convention T.
The first, concerning non-referring names, isn't that hard to understand, but we worked through the question of exactly what the argument assumes, and once one does that it becomes clear that the argument has a very narrow target: Frege, basically.
The second is much more interesting. The rough idea is that, if all there is to say about truth is given by Convention T, then truth cannot play the role in logic that it is often assumed to play. In particular, the truth-tables can have no explanatory value. But what does that mean? That's the hard question.
Ultimately, I think the answer turns on the notion of truth-functionality: A deflationist cannot really make sense of the notion of truth-functionality. The usual way to try to do so is to talk about inferences like:
  • A & B
  • B <--> C
  • So A & C
But this only works if you assume that the biconditional is itself truth-functional, and there's no reason to assume that. And the same complaint applies even if you try something like:
  • (B & C) v (~B & ~C)
instead of the biconditional. But that's a larger issue.
I don't know how many people have tried giving students extensive reading notes for papers like "Truth". But I'm going to keep doing it, that's for sure, and I'd recommend trying it to everyone.

Wednesday, February 6, 2013

The Grand Teaching Experiment (5)

Monday's class was on Tarski's "Semantic Conception of Truth", which is what Burton Dreben used to call one of Tarski's "popularizations" and refused to take seriously. I like it for teaching, because it gives a nice introduction to Tarski's approach to the liar and to truth generally.
I felt like I fell a bit more into lecturing on this paper than on the others we've discussed. Maybe that was because it's more technical in nature, and I thought there was just stuff people needed to know. For example, Tarski indicates but does not actually argue that satisfaction of Convention T guarantees the extensional correctness of the defined truth-predicate. We needed to see why that is.
Still, the class was a lesson in how easy it is to fall back into talking a lot, something I'll try to avoid in our future sessions.
Today, then, is Dummett's "Truth". It seems, from the written responses I got from the students, as if my detailed reading notes did help. I was very impressed, in fact, with how well they'd done with this difficult paper. We'll see how the discussion goes.

Tuesday, February 5, 2013

Sex as a Jam Session

There's something extremely important happening in feminist re-thinkings of the nature of sexual consent. Namely: the idea that there is something seriously wrong with the language of "consent"; that this language embodies a certain model of sexual interaction that is itself part of the problem. The locus of this work is the book "Yes Means Yes!" and an associated blog. There's no way I can spell out all the ideas here. The rough idea is that there is something oppositional about the language of consent and the "commodity model" of sex that underlies it. Person A has something person B wants, and so B asks A's permission to take or borrow or use it. When you see sex that way, you are just asking for people to take or borrow or use it even when A doesn't give permission.

So, the suggestion is, we should instead see sex as collaboration. The key advantage to thinking this way is that it now follows immediately that rape isn't sex. Period. You can't force someone to collaborate with you.

Sex educator Karen K. B. Chan has produced a (non-explicit) video that promotes this way of thinking about sex. It is absolutely fantastic stuff. Everyone should see this.

Monday, February 4, 2013

Reading Guide for Dummett's "Truth"

It occurred to me that maybe a lot of people could use a guide to Dummett's paper "Truth". So here is the "reading guide" I produced for my students.
As you'll see, I didn't even try to get into the "anti-realist" stuff at the end. If anyone wants to add some material on that, I can update this for general use.
  • Dummett begins the paper by expounding Frege's claim that sentences refer to their truth-values. It is easiest to understand this claim when it is put differently: that the "semantic value" of a sentence is its truth-value. And that claim is best understood in terms of the truth-tables: that the central semantic fact about a sentence is that it is true or false.
  • Dummett then rehearses an argument that a sentence cannot refer to the proposition it expresses. The argument is:
    1. "Mark Twain was an author" and "Samuel Clemens was an author" express different propositions.
    2. "Mark Twain" refers to the same person as "Samuel Clemens".
    3. The corresponding parts of the two sentences therefore have the same reference.
    4. Reference is "compositional", in the sense that the reference of the whole is completely determined by the references of the parts.
    5. Hence, the two sentences must have the same reference.
    6. Hence, that reference cannot be the proposition expressed.
    The point of this is really just to introduce the idea of thinking of truth from the perspective of logic.
  • Dummett then suggests that, while it's reasonable to think that sentences do have "semantic values", Frege has to earn the right to say that their semantic value is their truth-value. On pp. 142-3, Dummett introduces an analogy between truth and falsity, and winning and losing, to illustrate what Frege would have to do to earn that right. What exactly does Dummett think Frege would have to do?
  • Dummett then proceeds to argue, on pp. 145-6, that (the propositional version of) the T-scheme may not even be correct. The argument turns on the idea that there may be sentences that are perfectly meaningful—they express propositions—but are neither true nor false. A putative example would be something like, "The greatest prime number is one less than a perfect square". Frege would have held that this expresses a proposition, but does not have a truth-value, due to the fact that there is no greatest prime. Why, then, does Dummett think that:
    It is true that the greatest prime number is one less than a perfect square iff the greatest prime number is one less than a perfect square.
    is not itself true?
  • Dummett then argues, on pp. 146-9, that, even if its instances are all true, the T-scheme "cannot give the whole meaning of the word 'true'". The argument turns on the assumption that the truth-tables have some explanatory value, in particular, that they embody (at least partial) explanations of the sentential connectives. How exactly is this argument supposed to go?
  • On p. 149, Dummett then concludes that a theory of truth must be possible in a certain sense. In particular, he thinks that it must be possible for us to articulate the point of our characterization of assertions as true and as false. There is a sketch of what Dummett has in mind in the paragraph running from p. 149 to p. 150. Try to articulate as best you can what research program he means to be articulating.
  • On pp. 150-4, Dummett then argues in support of a very general claim that he makes on p. 150: that, given the point of the characterization of assertions as true and as false, there is no need, and no room, for any finer characterization, and so it is senseless to say that an assertion is neither true nor false. The core of the argument is on p. 153, where Dummett suggests that, although we might call both conditionals with empty antecedents and sentences containing non-referring terms "neither true nor false", there is an important asymmetry between the two cases that this common terminology obscures. What is that asymmetry?
  • Finally, on p. 154, Dummett concludes that "we should abandon the notions of truth and falsity", at least in connection with the explanation of the meanings of statements. In fact, however, that isn't quite what he means. He thinks there is a particular way of using "true" and "false" that is unhelpful and another way of using them that would still be OK. What is the difference? And how is it related to Dummett's central thesis about the role of the concept of truth?
  • Dummett then proceeds, on pp. 154-7, to explore whether there might yet be a point in calling certain statements neither true nor false. He argues that there may well be, but that, if there is, it must necessarily concern the way such statements behave when they occur as parts of other statements (e.g., as antecedents of conditionals). It is morally certain that we will not get to this material, so you do not need to read beyond p. 154. But if you wish to do so, then the question to ask is just how Dummett's arguments here are supposed to cohere with the earlier ones, and what point he thinks there could be in distinguishing among different ways a statement might be true or false.
  • Finally, on pp. 157-62, Dummett introduces a set of considerations that are supposed to show that the notions of truth and falsity that are appropriate to the evaluation of assertions are not the classical notions of truth and falsity. Rather, calling an assertion "true" is like saying it is justified, and calling it "false" is like saying that it is unjustified. This sort of argument is one that became closely associated with Dummett, and he spent much of his career trying to develop it and to fill in the details. We certainly will not discuss this material.

The Grand Teaching Experiment (4)

I mentioned a couple posts back that we are scheduled to read Dummett's paper "Truth" this Wednesday. As many of you will know, this paper is legendarily hard and is often said to contain every major idea Dummett would spend the rest of his career developing. That is a slight overstatement, but it does indicate the rather frightening density of the paper.
Terrified by this prospect, I decided that what I needed to do was to give the students a whole lot of guidance about how to read Dummett's paper. The result can be found on the course website. We'll see how much it helps.
More generally, I realized when thinking about this that it isn't just Dummett's paper that's hard. All philosophy papers are hard. For many of my more philosophical "survey" courses, I've therefore often provided students with a handful of questions that might help structure their reading, and those sorts of questions are already on the website for this course. When I was lecturing on this material, those sorts of brief questions might have been adequate. But if I'm not lecturing, if I'm essentially expecting the students to do more of the work for themselves, then they're probably not adequate.
So, yesterday, even though we were scheduled to talk about Tarski's "Semantic Conception of Truth" today, I went to the course website and put up a similar reading guide for that paper.
To see the contrast, the original questions were:
What does Tarski think the Liar Paradox shows about our intuitive notion of truth? How is Convention T supposed to be related to our intuitive notion of truth? What are an object-language and a meta-language? How does distinguishing between them help us solve the liar paradox?
What's there now is:

  • Tarski insists that a definition of truth must be "materially adequate" and "formally correct". What are these two notions?
  • If a definition of truth satisfies Convention T, that is supposed to imply that it is in some sense correct. In what sense? And why? Note here the difference between extensional and intensional correctness that Tarski himself discusses.
  • What does Tarski mean by saying that truth is a "semantic" concept?
  • What is Tarski's diagnosis of the Liar Paradox? That is: To what exactly do his conditions (I), (II), and (III) come? To answer this question, analyze the informal presentation of the Liar on pp. 347-8. Where exactly do the three conditions play a role? Is there anything else that plays a role that Tarski is not mentioning?
  • What exactly does Tarski mean when he says that he will not "use any language which is semantically closed"? Why, if we do that, are we then forced to distinguish object-language from meta-language?
  • Tarski says that the meta-language must be "essentially richer" than the object-language if we are going to be able to define truth for the object-language. How exactly must the meta-language differ from the object-language?
And I will plan to structure today's class around these same questions.

    The Grand Teaching Experiment (3)

    Friday, we had our first "discussion" class. I still call them that, although now every class is a discussion class. But these ones are meant to be more wide-ranging and, more specifically, devoted more to criticism and evaluation than to exposition and understanding.
    When I've taught courses with this same sort of structure before—lectures Monday and Wednesday, discussion on Friday—the discussion sessions have rarely flowed well. They had a tendency to turn into question and answer sessions, with not a whole lot of actual discussion. This one was much better, amazingly better given that it was the first one, so early in the semester. So perhaps that is a good sign.
    It doesn't hurt, of course, that the class is full of students with a lot of philosophical experience, and that many of them have taken at least one course with me before, some of them many more than that. But still.

    Friday, February 1, 2013

    Ayer on Truth


    On a more substantive note: Ayer's 1953 paper "Truth" is really underrated, in my opinion. Did anyone make the now familiar points about the ineliminability of "true" before him? The paper is not often cited, and when it is, it is almost always cited wrongly: Volume 25, 1953. It was in issue 25, but volume 7, which makes me kind of suspect that a lot of people who cite it haven't actually read it.
    I'm all the more suspicious since Ayer is so often claimed for the deflationist side, when in fact his view is much more nuanced. The crucial remarks come at the transition between two very different parts of the paper:
    Let it be granted then that we must forego any general definition of truth, and let it also be granted that there are certain contexts in which the words 'true' and 'false'...are ineliminable. It does not follow that there is any mystery about their meaning. On the contrary, their function is quite clear. ...To speak of a sentence, or a statement, as true is tantamount to asserting it, and to speak of it as false is tantamount to denying it. ...[I]t is hard to see what further explanation of [truth] is required.
    Can we say then that we have solved the philosophical problem of truth? What is disturbing about our solution is its simplicity. If that is all there is to it, it is hard to see how anybody, even a philosopher, can ever have supposed that the question 'What is truth?' presented any difficulty at all. ...All the same they must have known well enough how the word 'true' was actually used. Such information as that it is true that the sky is blue if and only if the sky is blue could hardly be expected to come upon them as a revelation. ...What they would say...is that the provision of these partial definitions did not meet their problem. It remains, therefore, for us to see what this problem can have been and if possible to solve it.
    Ayer then goes on to do exactly that. His understanding of what the problem is, of course, is shaped by his verificationism, but he does think there is another problem that goes by the name "the problem of truth", just as Strawson does.
    Ayer's paper is pretty hard to find. Feel free to ask me for a copy if you need one. Oh, and beware! Ayer also published a paper titled "Truth" in 1963, in his collection The Concept of a Person and Other Essays. There is a fair bit of overlap between the two papers, but they are different papers.

    The Grand Teaching Experiment (2)

    Shortly after I made my first post about how I'm trying something different with teaching this year, I went off to my Wednesday class. The paper for the day was Ayer's "Truth", published in Revue Internationale de Philosophie in 1953.
    This particular class seemed to go pretty well. I laid out the topics I thought we should cover: Ayer's discussions of whether "true" is eliminable, of whether convention (T) can be understood as a definition of truth, and of the metaphysical and epistemological status of instances of (T).
    Most students seemed as if they'd understood the main outlines of Ayer's discussion, so it was easy to get the class to put Ayer's basic claims on the table. We were then able to work through some questions about them pretty effectively. At certain points, I felt it worthwhile to jump in and talk for a bit, but it seems unsurprising that I should need to do that from time to time, especially at the beginning of the semester.
    It helps, of course, that Ayer's paper is, as one would expect with him, extremely clear. Indeed, several students remarked how much they'd appreciated Ayer's straightforward prose after having slogged through Austin and Strawson.
    One lesson here may well turn out to be, then, that if you're going to try to teach a course without lecturing, you have (for the most part, at least) to choose papers that are relatively easy to understand. The ultimate test of that theory will come next Wednesday, when we read Dummett's paper "Truth".

    Wednesday, January 30, 2013

    EBookDroid

    For a while now, I've been trying not to print out copies of papers I'm using for class. I had a Kindle DX a while back, and I tried loading PDFs of papers into it. But the page rendering was so slow it was painful, and a real problem in class itself, since finding a particular passage would simply take too long.
    More recently, I've acquired a Samsung Galaxy Tab 10.1, which makes for an excellent e-reader. It has a nice, bright screen, good resolution (1280x800), and lots of memory. Plus, I can access my course web site, or JSTOR, or whatever, directly from it (and with tools like AndFTP, I can even access my servers using SFTP and public key authentication).
    But as I've mentioned elsewhere, a lot of the material I've scanned for my own use is in DjVu format, so I recently found myself needing to find a DjVu reader for Android.
    It turns out (unsurprisingly, really) that there are several options, but the one on which I've settled for now is EBookDroid, a free and open source (GPL'd) ereader. Not only does it handle DjVu, it's got a lot of other nice features, such as the ability to set lots of named bookmarks (making particular passages easy to find). But one of the nicest things about it is that it will automatically split pages, if the file you are reading happens to have facing pages on a single page, like a photocopy. And it will automatically scale pages to the content area, not to mention wash your laundry and make your dinner. An excellent tool.
    I've seen some people mention that EBookDroid "hijacks" PDF links from the browser, but I have not seen this issue myself. It does register itself as one possible PDF reader, but that is all.



    Scan Tailor

    For quite some time now, I've been putting all the readings (except books) for my various classes online, on my course websites (such as here or here). In the good cases, this just means downloading stuff from JSTOR or some equivalent, or linking directly to it, though I very often convert the PDFs I get that way to DjVu files, since (a) the DjVu files are often a lot smaller (like one tenth the size), and (b) I prefer to have files where two pages appear on a single page, so that when they are printed you don't waste so much paper.
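    If you just want the two-up trick, pdfjam's pdfnup will do it; a sketch, with "paper.pdf" standing in for whatever you downloaded:
    # put two source pages side by side on each landscape output page
    pdfnup --nup 2x1 --landscape --outfile paper-2up.pdf paper.pdf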
    I've developed a number of tools to make this easier to do. I'll blog about them later. (Some of them are already available here.) What I wanted to mention today is a really, really cool program I found last weekend, called "Scan Tailor". It's "free and open source" (GPLv3) and available for Linux, OSX, and Windoze.
    I've found that the easiest way to scan stuff I need from books (e.g., Dummett's "Frege's Myth of the Third Realm", from Frege and Other Philosophers) is first to make a photocopy of the paper and then to scan that. (This is easiest with a sheet-fed scanner, such as the HP Scanjet 5590, which works fine with Linux.) You can simply make a PDF or DjVu from the result, but, if you just do that, it will usually look like crap. For one thing, you get big black marks along the sides and, often, down the middle (where the spine of the book is). And the pages can be hard to read on screen, since they are often not quite square. Even half a degree's rotation is easy to see, and very annoying.
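    In case it helps, the scanning step itself can be driven from the command line with SANE. A rough sketch only; the option names and values depend on the backend for your particular scanner, and "scanimage -L" will tell you what it can see:
    # scan a stack of pages from the feeder into numbered PNM files;
    # sheet-fed scanning usually also wants a --source option (often "ADF"),
    # but the exact value is backend-dependent
    scanimage --batch=page-%03d.pnm --resolution 300 --mode Gray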
    So, in the past, I'd load the pages one by one into Gimp (an open source image editor) and fix them up. But this was time consuming, and hard to get right. That is where Scan Tailor comes to the rescue.
    Here's how it works. You put all the page images into some directory, and then you open Scan Tailor and create a new "project", pointing the program at that directory. All the pages then get loaded up. You can then rotate them, if need be. But the really useful bit is that the program will then automatically (i) split the pages in half (which you need to do for the next step), (ii) deskew them (i.e., correct for rotation of the text), (iii) put a bounding box around the actual text (thus eliminating the black marks), (iv) put clean new margins around that text, (v) despeckle the images (remove stray black dots), and then (vi) output the resulting page images to a directory of your choosing, so you can assemble them into a PDF or a DjVu.
    And at every one of those steps, you can intervene to make manual corrections, if need be. It totally, totally rocks.
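    The assembly at the end is the easy part, once Scan Tailor has written its output. A sketch, assuming TIFF pages in a directory called out/ (ImageMagick or libtiff will both do the job; the DjVu route goes through the djvulibre encoders instead):
    # glue the cleaned pages into a single PDF with ImageMagick
    convert out/*.tif book.pdf
    # or, via libtiff: make a multi-page TIFF and wrap it as a PDF
    tiffcp out/*.tif book.tif && tiff2pdf -o book.pdf book.tif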

    The Grand Teaching Experiment

    Over the last couple years, I've become very dissatisfied with the way I've always taught my non-logic classes, e.g., classes on philosophy of language. These tend to be fairly small classes, with enrolment in the range of 10-15, and the basic model has been this: I've lectured on Mondays and Wednesdays, and we have had discussion on Fridays, led by me.
    But I've read several things recently suggesting that lecturing is not a very effective way to get students to learn things. So this semester, in my course on Theories of Truth (Phil 1890D), I'm trying something different. I propose to blog about it from time to time.
    The first thing I'm doing is trying to make use of Brown's new online teaching framework, called "Canvas". It does quite a lot. For example, there is an integrated conferencing system that I may try to use later. And it has simpler stuff, like the ability to schedule assignments, which are then automatically entered into a "grade book". But the main thing I'm using is the discussion board. For each of the readings, I've set up a discussion thread, and I'm requiring everyone in the class to post to it prior to class.
    Obviously, this is just taking the place of the "response papers" that lots of people use, anyway, but it has a few advantages.
    1. It's easy for me to comment on people's responses simply by replying in the discussion thread. As a result, they can get feedback before we meet for class. 
    2. The students' responses, and my comments, are visible to the other students, so there is some opportunity for them to learn from each other. So far, there has only been a little discussion among the students, but I'm hopeful that, as we get into the semester, and as we all adjust to this new system, there will be more. (I've set it up so they have to post before they can see what other people wrote, for the obvious sort of reason.)
    3. Since contributing to discussion is an "assignment", it is linked to the grade book, and I can enter grades (not much more than "did" or "didn't") very easily.
    Much of this, of course, could be done with a Google Group, but the way Canvas automatically generates a syllabus from my discussion assignments is very nice. And I can add the other course assignments, too, so a calendar for the semester is automatically created.
    The second thing I'm doing differently is I'm not lecturing. At all. I told the students this at the first meeting, and when I walked into the first "real" class, I had no lecture notes. I'm trying to run the entire class as discussion.
    The days that would previously have been devoted to lecture are now devoted to discussion that is aimed at understanding the readings. The day that was previously devoted to discussion is now devoted to discussion aimed at evaluating and criticising the readings. We've had two of the former so far (on Austin's and Strawson's famous papers on truth), and I'm not sure yet how they are going. My strategy has been to identify topics from the papers that we should talk about. So, in the case of Strawson's paper, for example, these were: His criticisms of Austin's account of (i) statements, (ii) facts, and (iii) correspondence, and (iv) his own positive account of the use of "true". The first class seemed to go pretty well. The second one, a bit less so, and I ended up talking more. But that may simply have been because Strawson's paper is quite hard, and maybe that is a sign that I should do something else.