
Friday, August 5, 2016

Make New Files in Some Directory Be Accessible to a Group

My wife Nancy has finally let me move her over to Linux, so now we can easily share access to a lot of files on our server, such as photos. But I want her not just to be able to read those files but also to be able to write them. On the other hand, I don't want to make them world-writable. I just want them to be Nancy-writable.

Obviously, the solution is to create a group rghnlw of which we are both members, make that group own the files, and make them group-writable. That's easy enough for existing files. But what about new files? I'd like those also to be owned by the group and to be group-writable.

Making new files be owned by the group is easy: all we need to do is make the directory in which these files live setgid, and make the group in question own that directory (and also any subdirectories). So let's say I've put our common files into /home/common/. Then the first step is:
# chgrp -R rghnlw /home/common
# chmod -R g+s /home/common

Now any new files created in /home/common/ will have group rghnlw.

Unfortunately, however, those files will not be group-writable, not if my umask and Nancy's are the typical 022. Changing the umask would be an option, but then every file that either of us created, anywhere, would be group-writable, which is not what I want.
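
To see the problem concretely (the file name here is just for illustration):

$ umask
0022
$ touch /home/common/example.txt
$ ls -l /home/common/example.txt

The new file does get group rghnlw, thanks to the setgid bit, but its mode is rw-r--r--: the group write bit never gets set, so Nancy still cannot edit it.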

The solution is to use access control lists. There are good discussions of how to use these for this purpose here and here, but I'll summarize as well.

First, we need to enable access control lists for whatever filesystem we are using. In this case, /home/ is mounted on its own partition, the line in /etc/fstab looking like:
/dev/hda3      /home      ext3    defaults        1 2

We need to change this to:

/dev/hda3       /home      ext3    defaults,acl        1 2

And then, to activate the new setting, we need to remount:
# mount -o remount /home
# mount | grep /home

The latter should now list acl among the active mount options.

Second, we need to establish the access controls.
# setfacl -d -m group:rghnlw:rw /home/common/
# setfacl -R -m group:rghnlw:rw /home/common/

The former sets a default ACL, so that files created in /home/common/ from now on give group rghnlw read and write access; the latter applies the same entry recursively to the directory and to the files already in it.
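
To check that everything is working, create a test file and look at its ACL (again, the file name is just for illustration):

$ touch /home/common/test.txt
$ getfacl /home/common/test.txt

The output should include a group:rghnlw:rw- entry, inherited from the default ACL, and Nancy should now be able to edit the file even though my umask is still 022.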

Converting "Stitched" Pages from PDFs

More and more great books (including philosophy) are going out of copyright and so are appearing on public archives, like Project Gutenberg and archive.org. Unfortunately, though, many of these PDFs are constructed in a somewhat odd way, with each page consisting of several separate images that get "stitched" together. So if you try to extract the pages to run them through OCR, say, it ends up looking like you ran the pages through a shredder.
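
One way to pull the raw images out of such a PDF, for example, is with pdfimages from poppler-utils:

pdfimages book.pdf page

which writes the images out as page-000.pbm, page-001.pbm, and so on (monochrome scans come out as .pbm files), each one a strip of a page.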

Fortunately, as I noted in an earlier post, we can use ImageMagick to fix this up. I'm having to do this often enough now that I've written a small script to automate the process. Here it is:
#!/usr/bin/perl
use strict;
use warnings;

# Zero-pad an image number to three digits, to match the extracted filenames.
sub normalize {
        my $in = shift;
        return sprintf("%03d", $in);
}

# Build the list of "splits": the image numbers at which each page begins,
# ending with one more than the last image to be used.
my $mode = shift @ARGV;
my @splits;
if (defined($mode) && $mode eq "-n") {
        # -n INIT STEP PAGES: every page is built from STEP images, starting at INIT.
        my ($init, $step, $pages) = @ARGV;
        @splits = map { $init + $_ * $step } (0 .. $pages);
} elsif (defined($mode) && $mode eq "-s") {
        # -s SPLIT1 SPLIT2 ... SPLITn: the splits are given explicitly.
        @splits = @ARGV;
} else {
        die "Usage: stitch_pages -n INIT STEP PAGES | -s SPLIT1 SPLIT2 ...\n";
}

my $start = shift @splits;
my $stop = shift @splits;
my $newpage = 1;
while (1) {
        # Collect the partial images for this page...
        my @files;
        for (my $i = $start; $i < $stop; $i++) {
                push @files, "*" . normalize($i) . ".pbm";
        }
        # ...and stack them vertically into a single page with ImageMagick.
        my $cmd = "convert " . join(" ", @files) . " -append outpage" . normalize($newpage) . ".tiff";
        print "$cmd\n\n";
        system($cmd);
        last if scalar(@splits) == 0;
        $start = $stop;
        $stop = shift @splits;
        $newpage++;
}

The script can also be downloaded here.

There are two ways to invoke the program.

stitch_pages -n INIT STEP PAGES

In this case, INIT gives the number of the first image (this will usually be 0 or 1); STEP tells how many images are used to construct each page; and PAGES tells how many pages we are constructing.
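
So, for example, if the images are numbered starting from 1 and each page is stitched together from four of them, the first three pages would be built by:

stitch_pages -n 1 4 3

which runs, in turn:

convert *001.pbm *002.pbm *003.pbm *004.pbm -append outpage001.tiff
convert *005.pbm *006.pbm *007.pbm *008.pbm -append outpage002.tiff
convert *009.pbm *010.pbm *011.pbm *012.pbm -append outpage003.tiff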

Obviously, this assumes that the same number of partial images is used for each page. If that is not true, you can use the other form and specify the "splits" manually.

stitch_pages -s SPLIT1 SPLIT2 ... SPLITn

In this case, the first page is stitched together from images SPLIT1 through SPLIT2 - 1, the second from images SPLIT2 through SPLIT3 - 1, and so on. The last split given should thus be one greater than the last image available.
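
For instance, the three-page run above comes to the same thing as:

stitch_pages -s 1 5 9 13

where the final split, 13, is one greater than the last image used.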

New Paper: Logicism, Ontology, and the Epistemology of Second-Order Logic

Forthcoming in Ivette Fred and Jessica Leech, eds, Being Necessary: Themes of Ontology and Modality from the Work of Bob Hale (Oxford: Oxford University Press)
 
In two recent papers, Bob Hale has attempted to free second-order logic of the 'staggering existential assumptions' with which Quine famously attempted to saddle it. I argue, first, that the ontological issue is at best secondary: the crucial issue about second-order logic, at least for a neo-logicist, is epistemological. I then argue that neither Crispin Wright's attempt to characterize a 'neutralist' conception of quantification that is wholly independent of existential commitment, nor Hale's attempt to characterize the second-order domain in terms of definability, can serve a neo-logicist's purposes. The problem, in both cases, is similar: neither Wright nor Hale is sufficiently sensitive to the demands that impredicativity imposes. Finally, I defend my own earlier attempt to finesse this issue, in "A Logic for Frege's Theorem", from Hale's criticisms.

For the most part, the paper is not terribly technical, but there are some (what I think are) interesting applications of technical work on models of second-order arithmetic toward the end of section 3.