Photograph of ER holding up a treat for her dog Rufus outside her childhood home in London N10, September 1951. © Geffrye Museum (Museum number:750/2012-15)
One of my former colleagues at the Geffrye Museum, Gregory Salter, has been doing some really interesting work evaluating the material (I typed ‘data’ here first but it isn’t that – he’s considering the content, value and substance of the information and how to collect it effectively) in the Documenting Home archive collections. I have just realised that he is doing the “getting the information IN” bit and I am doing “getting the information OUT” again.
He started by looking at how the information about the home is collected – most usually through photographs and questionnaires, and questionnaires about photographs (there is lots more to discuss here about the process, directed and open questions, language and expectation, but he says it all a lot better). He has considered in some depth the answers to the questions and the descriptions of the photographs and information about taste and behaviour, but the data is not objective or neat (curses!). People don’t use the same words for rooms, meals or activities, they don’t mean the same things when they describe objects or relationships, or relationships with objects, and accounts of taste and behaviour are so subtle and nuanced and subjective that trying to describe even the broadest of patterns is like nailing down jelly.
This is all crashingly obvious, I know, but I set up the documentation standards for this stuff, all neat in my lovely, flexible, very thorough database – anything we could nail down we did; it’s very searchable you know. I like term lists and authorities, I like order – I (used to) MANAGE DATA – so to finally try and get to grips with the content, the real value of all this information, to find that it is not consistent, it won’t line up nicely and that it is reduced by standardisation, is somewhat irritating. Incidentally, all my lovely curatorial colleagues at the Geffrye who actually tried to catalogue (read shoehorn) these rich and diverse collections in the record schema that I inflicted upon them are either nodding sagely or imploding with “I told you so” at this moment.
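The tension between term lists and the contributors' own words can be sketched in a few lines. This is just an illustration, not the Geffrye's actual schema: the authority list and field names here are invented. The point is that a record can carry both the original phrase and the standardised term, so the search benefits of authorities don't have to come at the cost of the richness.

```python
# Sketch: normalise free-text room names against a small authority list,
# but keep the contributor's original wording so nothing is lost.
# The vocabulary below is invented for illustration only.

ROOM_AUTHORITY = {
    "lounge": "living room",
    "sitting room": "living room",
    "front room": "living room",
    "kitchenette": "kitchen",
    "scullery": "kitchen",
}

def normalise_room(raw: str) -> dict:
    """Return both the original phrase and the authority term (if any)."""
    key = raw.strip().lower()
    return {
        "original": raw,
        # None means "no match" - better to leave it than to force a fit.
        "authority_term": ROOM_AUTHORITY.get(key),
    }

records = [normalise_room(r) for r in ["Front Room", "scullery", "box room"]]
# "box room" gets no authority term, so it remains findable only by the
# contributor's own words - which is exactly the messiness described above.
```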
Information about place can be extracted and played with relatively easily, right down to street level – but what about at a really granular level? There are some wonderful floor plans, lists of rooms and descriptions of layouts in the collection that relate (with varying degrees of success) to photographs, objects and ephemera; how do we get at that and represent it in a way that is meaningful and interesting? I keep referring to this as micro-mapping, but I am sure proper geographers have a term for it.
Again with temporal data – there are dates for the objects, dates of moves and purchases, changes to decor, births, deaths and tenancies, but they sometimes contradict each other, are expressed in ways that make it very difficult to see or map a single trajectory (there often isn’t one), and what is recorded in the catalogue record is often the “production date” of the photograph or document, which the scanning and creation of a digital copy rather messes up. (We did actually work really hard to resolve this – so the span of dates for the collection or for particular events is there too.)
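One common way of handling this – again just a sketch, with invented example dates – is to store every uncertain date as a span (earliest, latest) rather than a single point, so a photograph’s production date and the event it depicts can sit in the same record without pretending to agree:

```python
# Sketch: represent uncertain or conflicting dates as (earliest, latest)
# spans rather than single points. The dates below are invented examples.
from datetime import date

def spans_overlap(a: tuple[date, date], b: tuple[date, date]) -> bool:
    """True if two (earliest, latest) spans share any time at all."""
    return a[0] <= b[1] and b[0] <= a[1]

event_span = (date(1951, 9, 1), date(1951, 9, 30))   # "September 1951"
print_made = (date(1952, 1, 1), date(1952, 12, 31))  # print produced later

# The production date of a copy need not overlap the event it depicts,
# and a span model lets both survive in the record without contradiction.
assert not spans_overlap(event_span, print_made)
```

The same overlap test then gives a crude but honest way of asking "what was happening in this home around 1951?" across records whose dates never quite line up.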
Back to the main question: can we use data visualisation to improve this situation? I think so – geo-mapping and chronographics are among the more developed methods of data visualisation – but how do we layer the emerging themes, public and personal significance, variations and messiness onto the existing records? How do you capture and represent the fuzziness, not only within one set of material, but across the whole archive? I am beginning to question how useful the catalogue data is now, when the analysis of the content is the bit that is interesting. Of course it does have huge value – in the listing and management of the collections (actually being able to find stuff is useful to any research!) – but I think the activity of documenting archives and artefacts often gets conflated with actually understanding the material, or being able to see it clearly.
So, I need a special content-reading-variable-understanding-nuanced-data-extraction device and a multi-layered-space-time-three(or even four)-dimensional-visualisation widget please. Anyone got one of those to hand?
I would really like to hear from anyone working in this area, or about any examples of visualising archive data successfully – I have listed a few projects on the links page here. I would also be very happy to hear what your questions might be, and what you would like to be able to do with data from archive material like this.