Whether it is a matter of discourses or of techniques, it seems plausible that solid knowledge of philology, of linguistics, and consequently of ancient languages, represents a definite advantage for finding one’s way in a contemporary reality whose contours are constantly shifting. […] In parallel, it seems necessary to think about the foundations of a critical education on the new digital technologies; an education that provides the tools to understand the basics of how information technologies work, but also to recognize and analyse the complex interactions that are woven between them and us at the level of our cognitive processes.
Recently, Lisa Spiro asked how philosophy defines itself with regard to the digital humanities (be sure to read the comments as well: some righteous indignation mixed with genuine reflection on the problems involved). I agree with the general gist of the post and the commenters: philosophers do not see themselves primarily as ‘digital philosophers’, so they do not tend to mix with the general DH crowd. Some digital humanists may not know that there is not just one but two peer-reviewed online encyclopedias of the discipline (both founded in 1995). Another thriving area is podcasting, leading in one case to 17 million downloads, in another case promising to cover the "history of philosophy without any gaps". And yes, there is even a subreddit on academic philosophy and another one on philosophy of science, both featuring high-quality submissions from all over the net, as well as bulletin boards (e.g. in my area of specialisation, W4RF). Finally, I would like to point out that one of the most innovative approaches to text processing in the last decade (Pandoc) was coded by a philosopher.
So should we conclude that all is well? For some, the use of new media for dissemination and socializing may not count as doing ‘digital humanities’ in the strict sense of the word. In this post I will therefore examine a ‘core part’ of the digital humanities, namely visualization and its application within philosophy. I became interested in this because I have grappled with visualizing a subset of the nanopublications collected on http://emto-nanopub.referata.com . Before presenting my own attempts in a follow-up post, I was curious to find out what others have done. And again, there is more to be found than some may think. I will take the liberty of not just presenting the visualizations, but also of developing some criteria for evaluating them.
Visualization in philosophy
First, let’s examine an often-quoted definition of visualization (I owe this reference to Robert Farrow’s excellent presentation on visualisations in philosophy, “Visual & Philosophical Pedagogy” (2012)):
A visualization method is a systematic, rule-based, external, permanent, and graphic representation that depicts information in a way that is conducive to acquiring insights, developing an elaborate understanding, or communicating experience.1
Technical progress forces us to put a question mark after the epithet ‘permanent’ (by now, dynamic graphs can represent data in real time), but this is not crucial here. I am more intrigued by the two main goals of a visualization: it is meant to communicate insights and experiences to others, or it must help others gain new insights on their own. Visualizations thus have a communicative or a heuristic function. This suggests in turn two criteria for evaluating visualizations:
- Is the content that is communicated relevant (to some audience)?
- Does the visualization allow others to gain new insights?
To this I would like to add another criterion:
- Does the visualization allow for critical examination?
This third criterion is essential for digital scholarship (see this post by Mark Sample for further elucidation). Critical examination can be facilitated in different ways: the datasets (and the scripts used to ‘polish’ the data for publication) could be provided in a repository. Or, ideally, the visualization itself offers some interactivity that allows viewers to examine the evidence presented alongside it.
Communist ethics and life on other planets
The first visualization I want to discuss tries to view philosophy "through the macroscope". Chris Alen Sula presents a network graph of the Philosophy subject headings in use at the Library of Congress.
The audience
In order to assess the prospective audience of a visualization, we must first find out what is shown. The LOC subject headings are a thesaurus, i.e. a controlled vocabulary used to briefly describe the content of books. So what we see in this graph is a very special perspective on the philosophical enterprise, namely a librarian’s view. The connections between different terms do not represent some original philosophical insight, but an effective system for describing books and other publications - in other words, a system governed by utilitarian motives rather than by an attempt to grasp some ‘structure of philosophy’. From the disciplinary point of view, it does not really make sense to posit, e.g., a connection between “nihilism” as a philosophical viewpoint, the concept of “nothing (philosophy)” and the notion of non-being. But for sorting publications in a transparent way, such a connection may make perfect sense.
New insights?
A force-directed graph may not be the best tool for examining a hierarchy of concepts. Here’s an alternative view of the same data, using a so-called ‘treemap’:
Here we see easily (without panning or zooming) that the topic ‘conduct of life’ is of disproportionate relevance (i.e. it has a large number of subtopics), or that ‘social ethics’ is regarded as a subdiscipline of philosophy. For a professional philosopher, such details may be interesting - but mostly from a ‘historical’ point of view, namely as evidence that thesauri develop over time and preserve concerns of the past (or of the future: ‘Life on other planets’ is a subtopic of ‘life’, at the upper right-hand side of the graph).
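The idea behind a treemap is simple enough to sketch in a few lines: each rectangle’s area is proportional to a weight, here the number of subtopics. The following is a minimal ‘slice-and-dice’ layout in Python; the subject headings and their counts are hypothetical stand-ins, not the real LOC data, and services like Many Eyes use more sophisticated squarified layouts.

```python
# Minimal 'slice-and-dice' treemap layout: one strip per item, each
# strip's area proportional to the item's weight. Data are invented.

def treemap_slice(items, x, y, w, h, horizontal=True):
    """Partition the rectangle (x, y, w, h) into strips, one per
    (label, weight) pair, with area proportional to weight."""
    total = sum(weight for _, weight in items)
    rects = []
    offset = 0.0
    for label, weight in items:
        frac = weight / total
        if horizontal:
            rects.append((label, x + offset, y, w * frac, h))
            offset += w * frac
        else:
            rects.append((label, x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Hypothetical subtopic counts per heading (not the actual LOC figures):
headings = [("Conduct of life", 120), ("Social ethics", 30), ("Life", 10)]
for label, rx, ry, rw, rh in treemap_slice(headings, 0, 0, 100, 60):
    print(f"{label}: area {rw * rh:.0f}")
```

Even this toy layout shows why a treemap makes a lopsided hierarchy visible at a glance: the disproportion between headings is encoded directly as area, not as node positions that must be explored by panning.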
Openness?
Reuse of datasets is built into the ‘Many Eyes’ visualization service: you must allow reuse if you upload data to create a visualization.
Davidson’s philosophical system
The audience
The diagram displays topics in Davidson’s philosophy which should be identifiable to a student with some acquaintance with Davidson’s work. These topics are connected by undirected lines stating that topic X stands in some unspecified relation to topic Y.
The purported audience is again a bit unclear: a student of Davidson must guess what exactly the relation between these topics consists in, and in which text (or which passage of a text) this relation is stated. For experts these relations might be evident, but for them such a graph is likely to be less instructive. To use a Sellarsian term, this visualization is a model in dire need of a commentary (as might be provided, e.g., in a classroom lecture).
New insights?
Gaining new insights may be possible if the viewer tries to reconstruct the information lacking in the diagram. If a student decided to ‘fill in the gaps’ on their own, this might be a worthwhile learning experience.
Openness?
It is unclear what exactly counts as the ‘data’ visualized here. Davidson’s texts are readily accessible (though protected by copyright). Bibliographical data for the individual topics are not available. The uninformed viewer cannot evaluate the implicit statements made in the diagram or the evidence they may be based on.
What is “the history of philosophy”?
The last visualization I want to discuss has received some attention in the blogosphere and the ‘philosophical social web’: Simon Raper’s take on "graphing the history of philosophy".
The audience
Again, we must first reflect a bit on what is shown here: Simon Raper used ‘DBpedia’, a semantic web service associated with Wikipedia, to load data from Wikipedia articles on philosophers describing the relation ‘influenced by’. As in the first example, these data have been organised into a network graph. But the graph is introduced as a depiction of “the” history of philosophy. That, of course, is simply not true: it is a depiction of the research done by Wikipedians on the question of who influenced whom in the history of philosophy. Someone interested in online encyclopedias may take this graph as a starting point for a more thorough investigation of the knowledge claims made there. But no philosopher would use the fact that Wikipedia claims an influence of philosopher X on philosopher Y as a premise in an argument. Hence, no philosopher can use the graph itself as a resource for research, i.e. as something serious philosophical claims can be based upon.
New insights?
The presentation of the graph in the blog post does not allow one to inspect it in detail. For this, the linked vector graphic must be downloaded and viewed in a browser, using the browser’s zoom and navigation to move around. But again, the data used for the graph cannot be inspected directly; for that, it is necessary to search the corresponding Wikipedia articles by hand.
Openness?
The graph is published under a liberal license. The blog post gives sufficient information on how to use the open data provided by Wikipedia for reproducing the graph (and evaluating the decisions made in its production).
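To make this reproducibility concrete, here is a sketch of how such ‘influenced by’ data can be requested from DBpedia and turned into an edge list. The endpoint URL and the property name `dbo:influencedBy` are assumptions about the current DBpedia schema (Raper’s original post used the older `dbpedia-owl:` prefix); the sample result below is hand-made in the SPARQL JSON format, so the snippet runs without a network connection.

```python
# Build a SPARQL query URL for DBpedia and parse influence relations
# out of a result set. Endpoint and property name are assumptions.
import urllib.parse

ENDPOINT = "https://dbpedia.org/sparql"

QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?philosopher ?influence WHERE {
  ?philosopher a dbo:Philosopher .
  ?philosopher dbo:influencedBy ?influence .
}
"""

def query_url(endpoint, query):
    """Build the GET URL for a SPARQL query requesting JSON results."""
    params = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"})
    return f"{endpoint}?{params}"

def edges_from_results(results):
    """Convert SPARQL JSON results into (influenced, influencer) pairs."""
    return [
        (b["philosopher"]["value"], b["influence"]["value"])
        for b in results["results"]["bindings"]
    ]

# A hand-made sample in the SPARQL JSON result format, for illustration:
sample = {"results": {"bindings": [
    {"philosopher": {"value": "http://dbpedia.org/resource/Hegel"},
     "influence": {"value": "http://dbpedia.org/resource/Kant"}}]}}
print(edges_from_results(sample))
```

Fetching `query_url(ENDPOINT, QUERY)` and feeding the JSON response to `edges_from_results` would yield the raw edge list from which a graph like Raper’s can be rebuilt - and, more importantly, checked against the underlying Wikipedia articles.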
Some readers may feel that I have neglected important agents in the field of ‘digital philosophy’ (e.g. the InPho project). Their reuse of the Stanford Encyclopedia of Philosophy is not based on liberal licensing, but apparently on special arrangements. Web scraping and data mining outside the project are prevented by copyright.
So it may not come as a surprise that the ontology developed by InPho is not licensed for reuse (though you can use an API to search it programmatically). I am not quite sure why philosophers working in DH-related areas are so much in favor of closed development models. But this resistance to openness may be the most fundamental reason why digital humanists outside philosophy do not perceive philosophers working on digital tools and topics as part of their community. Or, to quote Mark Sample: "The digital humanities is not about building, it’s about sharing."
It’s important to see that this is not merely some hippiesque obsession with the knowledge commons. The requirement of openness goes right to the heart of digital scholarship - evaluation presupposes reproducibility. Open data in science should be complemented by ‘open data in the humanities’.
But in order to become more productive in the digital humanities, philosophers must reflect on the methodology of their profession and on how it may be enhanced by digital tools. If and insofar as the digital humanities are concerned with data, philosophers must begin to reflect on how they may profit from ‘texts turned into data’ (this seems to be the direction the Dutch Axiom [PDF] research group aims to take). So in order to become more competent in the digital humanities, philosophers need better ‘data literacy’. We must learn to evaluate data, and the claims made through them, with the same precision that philosophers apply in dissecting texts and arguments.
In the field of visualization, this has the trivial consequence that visualization as scholarship (not as a didactic ploy or a contribution to ‘public philosophy’) must be instructive for scholars and follow accepted standards of the profession. This in turn has two important implications:
- Visualizations are no end in themselves. Scholarly visualizations must contain information that is relevant for scholars and allows them to gain new insights into their domain.
- Statements made in a visualization (and every edge between two nodes is an implicit statement) must have corroboration, and it must be possible in principle to evaluate such knowledge claims.
In my next post, the reader will be able to evaluate my own attempts at visualizing nanopublications according to the criteria I have put forward here.
Ralph Lengler and Martin J. Eppler (2007), “Towards a periodic table of visualization methods for management”, IASTED Proceedings of the Conference on Graphics and Visualization in Engineering (GVE 2007), Clearwater, Florida, USA. URL: http://www.visual-literacy.org/periodic_table/periodic_table.pdf ↩
I designed the workshop so that it moved through four phases, with the goal that participants would ultimately walk away with concrete ideas about how they might integrate digital approaches into their own teaching.
I argue here that historians would be well served to expand their notion of what it means to read—as opposed to analyze—a text or set of texts with digital methods.
Posted January 28, 2013
Co-located with NAACL-HLT 2013 June 13 or 14, 2013, Atlanta, Georgia, USA
Submission deadline: March 1, 2013
The amount of literary material available online keeps growing rapidly: there are machine-readable texts from libraries, collections and e-book stores, as well as…
Eide, Øyvind, University of Oslo, Unit for Digital Documentation, Norway, email@example.com
How can a reading of a textual description of a landscape be expressed as a map? Maps form a medium different from verbal texts, and the differences have consequences not only for how things are said, but also for what can be said at all using maps. Where are these limitations to be found?
In this abstract, I discuss the relationship between verbal and map-based geographical communication. I have created a model of the geographical information read from a source text, then tried to express the contents of the model as maps. I will show that there are types of geographical information that can be stored in and read from verbal texts, but which are impossible to express as geographical maps without significant loss of meaning.
Object of study
I used a set of Scandinavian border protocols from the eighteenth century (Schnitler 1962) as source material for this research. The text is based on interrogations about geography with more than 100 different persons, of whom many presumably did not use maps very much if at all. It was created in a society, or a set of societies, on the brink of the transformation from oral to written cultures, where some were firmly placed within the written culture, while others had only been exposed to the activity of reading texts for a few decades. The voices in the text represent persons coming from different ethnic and professional backgrounds, e.g., Sami reindeer herders, Norwegian farmers, and Danish military officers, thus bringing a set of different perspectives into the geographical conversation.
In 2010, I had a long paper about the history of German translations of Othello rejected by a prestigious journal. The reviewer wrote: “The Shakespeare Industry doesn’t need more information about world Shakespeare. We need navigational aids.” About the same time, David Berry turned me on to Digital Humanities. I got a team together (credits) and we’ve built some cool new tools.
Digital Humanities takes many organizational forms at small liberal arts colleges, ranging from centers to initiatives to working groups to one dedicated scholar. This page indexes and links to the web presence of digital humanities at small liberal arts colleges within the NITLE network.
Small Liberal Arts Colleges
Bryn Mawr, Haverford and Swarthmore Colleges, Tri-Co Digital Humanities Initiative
Davidson College, Scholarship in the Digital Age
Hamilton College, Digital Humanities Initiative (DHi)
Haverford College, Tri-Co Digital Humanities Initiative
Lewis & Clark College, Watzek Digital Initiatives
Occidental College, Center for Digital Learning and Research
Richard Stockton College, South Jersey Center for Digital Humanities
University of Richmond, Digital Scholarship Lab
Wesleyan University, Digital Humanities Resource Guide
Wheaton College, Digital Humanities (page is out of date; Wheaton has a faculty working group)