another transcription setup

A few weeks ago I posted this screenshot of transcribing documents using DEVONthink’s sorter and LightZone. For the past week or so I’ve been working on “transcribing” some jail census material from the 1780s that doesn’t lend itself to the sorter, and I used this setup instead:

I need tables for these documents, so I simply split the screen between Preview (for the image) and Word. One could obviously use any word processor or text editor. On the plane this week, I also used this setup to transcribe a couple of cases, including a bestiality case that almost ended in capital punishment in 1800. The defendant argued that one could not assume proximity to an animal’s anterior interior, especially without a full moon, was related to sexual perversion, and that he could prove it if the court gave him a chance to re-present his own witnesses. They took pity and gave him a chance to reargue the case. I don’t know why, but it seems that all of the bestiality cases I have originated in Ecuador’s coastal zone, never the highlands. If I use this setup for normal cases, I usually save the files as .rtf rather than .doc, for easier manipulation with TAMS Analyzer, which I’m using to look at patterns in the language of various types of sexual crime prosecutions.

In the case of the jail censuses above, I later take the weekly tallies (which constituted the charts above) and make entries in a FileMaker Pro database for each individual. It’s a large database, heading toward 7,500 individual detainees. The weekly files are essentially backups for that database, and I also keep a hard copy of them. The database eases analysis of trends by gender, crime type, and judge.
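As a minimal sketch of the kind of tallying such a database supports, here is one way to count detainees by crime type and gender, assuming the records were exported as tab-delimited text (the field names and sample rows below are hypothetical, not the actual FileMaker schema):

```python
import csv
import io
from collections import Counter

# Hypothetical sample of a tab-delimited FileMaker Pro export;
# the real field names of the detainee database are not shown here.
sample_export = (
    "name\tgender\tcrime\tjudge\tweek\n"
    "Juan Perez\tM\ttheft\tQuito 1\t1785-03-07\n"
    "Maria Lopez\tF\tconcubinage\tQuito 1\t1785-03-07\n"
    "Pedro Gomez\tM\ttheft\tQuito 2\t1785-03-14\n"
)

def tally(field, rows):
    """Count occurrences of one field across all detainee records."""
    return Counter(row[field] for row in rows)

rows = list(csv.DictReader(io.StringIO(sample_export), delimiter="\t"))
print(tally("crime", rows))
print(tally("gender", rows))
```

With the full export, the same `tally` call over the judge or week field would surface the judge and crime-type trends mentioned above.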

About

Associate Professor of Early Latin America, Department of History, University of Tennessee-Knoxville

Posted in Apps for Research, Latin American History, Processes, Research and Writing, Uncategorized
3 comments on “another transcription setup”
  1. Nicolas says:

    Dear Chad,
    I’ve been following your digital research methodology blog entries with great interest, and encouraging my students to look at them too. (I share your interest in DEVONthink, but I’m still working out the overall organization and workflow it will fit into for me.) I will most probably keep a relational database component; that’s what I started off with as a grad student, with 4D to be precise. I would be keen to know more precisely how you organize the flow of data between DTPO, FMP, TAMS, etc., beyond what you indicate above. Can you index or search FMP data through DTPO, for instance?
    Thanks,
    Nicolas

  2. ctb says:

    Hi Nicolas–
    Thanks for reading! Alas, I have not yet completely worked through the relationship of all the data. When I wrote my earlier entries on DTPO, that was the only program I was using. It’s very good at what it does, but obviously has its own limits, and it was in dealing with those limitations that I started to add other components. I don’t think there is any way one could search FMP data through DTPO, unless it were exported as CSV or tab-delimited data, or as a PDF report, and then imported/indexed by DTPO. As it stands right now, I keep transcriptions, notes, PDFs, images (indexed), etc. inside DTPO. I export txt files that I want to mark up and import them into TAMS, where I do the actual tagging. I’m only using FMP for a database of jail detainees, and input the data directly there. I search that data from within FMP. The tools are doing different things, and so I go to them specifically for those things.
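    As a sketch of that export-and-index route, here is one way a tab-delimited FMP export could be split into per-record plain-text files that DTPO could then index (the file names and field names are made up for illustration):

```python
import csv
import io
from pathlib import Path

# Hypothetical tab-delimited export from FileMaker Pro; the field
# names are illustrative, not the actual database schema.
export = "name\tcrime\tweek\nJuan Perez\ttheft\t1785-03-07\n"

out_dir = Path("fmp_export_for_dtpo")
out_dir.mkdir(exist_ok=True)

# Write one small plain-text file per record; pointing DTPO at this
# folder would then make the records searchable from inside DTPO.
for i, row in enumerate(csv.DictReader(io.StringIO(export), delimiter="\t")):
    body = "\n".join(f"{k}: {v}" for k, v in row.items())
    (out_dir / f"detainee_{i:04d}.txt").write_text(body, encoding="utf-8")
```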

    You know, recently I’ve been thinking a good bit about developing some form of relational database as a central repository of info. Would this make the data more flexible? TAMS can now be set up as multi-user, and in that form it uses a MySQL database for the data. Users can log in from their own computers, sync data with the server, and upload new data/markup to the server. I was thinking that could be an interesting approach to building a research database. You’re still essentially limited to txt and rtf files, which excludes PDFs (ubiquitous, as you know, in academia these days). But with that MySQL backend, once you know how the tables are set up, it seems to me you could do some more interesting relational analyses with a little forethought and tagging discipline.
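    To make that concrete, here is a sketch of the sort of relational analysis a tagging backend allows, using SQLite as a stand-in for TAMS’s MySQL server (the table and column names are hypothetical, not TAMS’s actual schema): a self-join that counts which tags co-occur in the same document.

```python
import sqlite3

# SQLite stands in for TAMS's MySQL backend here; the `codes` table
# and its columns are hypothetical, not TAMS's actual schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE codes (doc TEXT, tag TEXT)")
con.executemany("INSERT INTO codes VALUES (?, ?)", [
    ("case01", "bestiality"), ("case01", "capital_punishment"),
    ("case02", "concubinage"), ("case02", "capital_punishment"),
    ("case03", "bestiality"),
])

# Which tags co-occur in the same document? A simple self-join,
# with a.tag < b.tag so each pair is counted only once.
rows = con.execute("""
    SELECT a.tag, b.tag, COUNT(*) AS n
    FROM codes a JOIN codes b
      ON a.doc = b.doc AND a.tag < b.tag
    GROUP BY a.tag, b.tag
    ORDER BY n DESC
""").fetchall()
for a, b, n in rows:
    print(f"{a} + {b}: {n}")
```

    With real markup behind it, the same query would show, for example, how often a given crime tag co-occurs with a punishment tag across the case files.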

    So, I’m curious: how have you used 4D?

    -ctb

  3. Nicolas says:

    Hi Chad,
    Your current setup seems to make very good sense, with DTPO as the default data storage location (even for pictures, you write? No slowing down of DTPO observed?), and TAMS and FMP receiving only certain precisely defined types of data. I guess that some of the crucial limitations of DTPO you mention are markup and relational structure? I once asked about the pros and cons of DTPO vs. relational databases on DT’s user forum, but the answers were somewhat disappointingly one-sided:
    http://www.devon-technologies.com/scripts/userforum/viewtopic.php?f=7&t=8132

    Now, if you retain within DTPO the initial RTF files that you then mark up in TAMS, or the weekly tallies that you then process into FMP data, I guess it is probably important to remember (or to encode somehow in your DTPO structure a reminder) that modifications of those bits of data should actually be carried out in the TAMS or FMP versions…?

    With regard now to your current issue, do you mean “central repository of info” with regard to all the research data on your hard drive, or with regard to a project that involves several participants? (I was leaning initially toward the former, which is more where my thoughts have been recently, but your comments on using TAMS’s multi-user functionality seem to point toward the latter.) I have only read about TAMS, but I found your suggestion intriguing. Why use TAMS here, rather than DTPO or FMP? Because it is free, and the work you are thinking of sharing has more to do with text files and/or the markup function? If you are interested in sharing more diverse kinds of data (you mention PDFs), then DTPO with DevonSync might work, for instance? Actually, I’m not sure; I really have no personal experience here.

    My use of 4D was devised in the pre-PDF age, if I may say so. Texts were present in physical books, journals, and photocopied extracts; my database stored primarily bibliographic data, quotations and annotations from texts, and field notes. My sense was that the diverse, thickly complex, and rather disorderly ethnographic situations I was studying did not lend themselves to a relational database approach, and so the relational links I set up were primarily between the tables of data on the one hand and my evolving Thesaurus on the other: a way to index all my data with the same controlled set of keywords.

    This system has served me well for 15 years, but now that we are in the PDF and web age, I feel that switching to a setup with DTPO as the central point of access to the more diverse data I am collecting could be well worth it. Ironically, my new long-term fieldwork project involves recurring objects of a rather regular type (territories, villages, collective rituals, deities, and specialists throughout a region), and so I might set up a more complex relational database structure too (probably in 4D, as I am by now familiar with it). Both DTPO tagging and 4D indexing would be based upon the same, continually evolving Thesaurus, although keyword consistency between the two systems probably cannot be maintained through automated means, which is a bit of a drag. (Why keep a Thesaurus? I definitely want to capitalize on the big investment I have made over the years, and although there has been great improvement in full-text search functionality, with fuzzy searches and the like, not to mention DT’s AI, I still believe that my own tagging/indexing cannot be replaced by such automated means, especially for more abstract analytical concepts that often are not present as words in the chunk of data itself.) I still need to figure out the exact workflow, however; the treatment of my (still handwritten) fieldwork notes might be something like:
    – constitute primary records in DTPO based upon my chunks of fieldwork notes,
    – whenever relevant, input some of this data into the 4D structure,
    – periodically export the 4D data into a designated DTPO folder to obtain greater coverage in my DTPO searches…
    Besides that, I am still figuring out whether I feel comfortable switching to Sente for my bibliographic needs; the interoperability between Sente and DTPO (e.g., with PDFs in the former and annotations in the latter) has had some ups and downs recently, I believe:
    https://sente.tenderapp.com/discussions/suggestions/8-devonthink-integration-is-now-broken-sort-of

    I guess that’s where I am right now…

    Nicolas

