-- JonathanPool - 03 Apr 2007
At this talk I reported on a pilot experiment I had just conducted, evaluating alternative methods of human disambiguation. I received suggestions for improving the experimental apparatus, and I then ran a revised main experiment.
The study is inspired by the vision of a Semantic Web containing disambiguated content produced by millions of Web authors, and by the idea of mass public contributions to corpus annotation.
If you want to look at the experiment from a subject's perspective, you are free to try it at http://utilika.org/re/aa/test.html. The version there is the revised one, based in part on the valuable comments received at my talk.
My talk was based on my report at http://utilika.org/pubs/etc/aa/. The slides illustrating my talk are at http://utilika.org/pubs/aa/talk/.
One source of subjects for the pilot study and the main experiment was Amazon Mechanical Turk, on which Bill McNeill spoke in more depth later this quarter. In the main experiment, I was able to recruit 200 subjects on Mechanical Turk within about 27 hours, paying them $0.75 each for roughly 20 minutes of activity. I also posted invitations on several newsgroups and got about 50 unpaid volunteers during the same period. About 5% of the Mechanical Turk subjects performed the tasks so fast that I hypothesized they were responding randomly; I will check this as I analyze the data. This suggests that each instrument should include some checks on respondent seriousness.
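As an illustration of one such check, here is a minimal sketch of a speed-based screen in Python. It assumes the response log is a CSV file; the file name (responses.csv), the column names (subject_id, seconds_elapsed), and the 300-second cutoff are illustrative assumptions, not details of the actual study.

    import csv

    # Hypothetical threshold: finishing a ~20-minute task in under
    # 5 minutes looks implausibly fast and may indicate random responding.
    CUTOFF_SECONDS = 300

    def flag_fast_subjects(path):
        """Return the IDs of subjects whose completion time is below the cutoff."""
        flagged = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if float(row["seconds_elapsed"]) < CUTOFF_SECONDS:
                    flagged.append(row["subject_id"])
        return flagged

    if __name__ == "__main__":
        fast = flag_fast_subjects("responses.csv")
        print(len(fast), "suspiciously fast subjects:", fast)

A speed screen like this could be complemented by embedding an item or two with an obvious correct answer, so that random responders can be identified by accuracy as well as by speed.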