Being philosophical about crowdsourced geographic information

By Renée Sieber (McGill University, Canada) and Muki Haklay (University College London, UK)

Our recent paper, The epistemology(s) of volunteered geographic information: a critique, started from a discussion we had about changes within the geographic information science (GIScience) research community over the past two decades. We’ve both been working in the areas of participatory geographic information systems (GIS) and critical studies of GIScience since the late 1990s, engaging people from all walks of life with the information that is available in GIS. Many times we’d work together with people to create new geographic information and maps. Our goal was to help reflect their point of view of the world and their knowledge of local conditions, rather than always aiming for universal rules and principles. For example, the image below is from a discussion with the community in Hackney Wick, London, where individuals collaborated to ensure that the information to be captured represented their views on the area and its future, in light of the Olympic construction happening on their doorstep. The GIScience research community, by contrast, emphasizes quantitative modelling and universal rules about geographic information (exemplified by frequent mention of Tobler’s first law of geography). It was not especially welcoming of qualitative, participatory mapping efforts, leaving them mostly at the margins of the discipline.

Participatory Mapping in Hackney Wick, London, 2007

Around 2005, researchers in GIScience started to notice that when people used their Global Positioning System (GPS) devices to record where they took pictures, or used online mapping applications to make their own maps, they were generating a new kind of geographic information. Once projects like OpenStreetMap and other sources of user-generated geographic information came onto the scene, the early hostility evaporated and volunteered geographic information (VGI), or crowdsourced geographic information, was embraced as a valid, valuable and useful source of information for GIScience research. More importantly, VGI became an acceptable research subject, raising questions such as how to assess its quality and what motivates people to contribute.

This about-face was puzzling, and we felt that it justified an investigation of the concepts and ideas that allowed it to happen. Why did VGI become part of the “truth” in GIScience? In philosophical language, questions such as ‘Where does knowledge come from? How was it created? What is the meaning and truth of knowledge?’ belong to epistemology, and our paper evolved into an exploration of the epistemology, or more accurately the multiple epistemologies, inherent in VGI. It’s easy to make the case that VGI is a new way of knowing the world, given (1) its potential to disrupt existing practices (e.g. the way OpenStreetMap provides an alternative to official maps, as shown in the image below) and (2) the way VGI both constrains contributions (e.g. to 140 characters) and opens them up (e.g. through easy-to-use interfaces and multimedia offerings). VGI affords a new epistemology, a new way of knowing geography, of knowing place. Rather than observing a way of knowing, however, we were interested in what researchers thought the epistemology of VGI was. They were building it in real time and attempting to ensure it conformed to existing ways of knowing. An analogy: instead of knowing a religion from the inside, you construct your conception of it, with your own assumptions and biases, while standing on the outside. We argue that this kind of construction was occurring with VGI.

OpenStreetMap mapping party (Nono photos)

We were likewise interested in the way that long-standing critics of mapping technologies would respond to new sources of data and new platforms for those data. Criticism tends to be grounded in the structuralist works of Michel Foucault on power and how it is influenced by wider societal structures. Critics extended traditional notions of volunteerism and empowerment to VGI without necessarily examining whether these were applicable to the new ‘ecosystem’ of geospatial apps, companies, code and data. We were also curious why the critiques focussed on the software platforms used to generate the data (e.g., Twitter) instead of the data themselves (tweets). It was as if the platforms used to create and share VGI were embedded in various socio-political and economic configurations, while the data remained innocent of any association with those assemblages. Lastly, we saw an unconscious shift in the Critical GIS/GIScience field from the collective to the personal. Historically, in the wider field of human geography, when we thought of civil society mapping together using technology, we looked at collective activities like counter-mapping (e.g., a community fighting an airport runway extension by conducting a spatial analysis to demonstrate the adverse impacts of noise or pollution on the surrounding area). We believe the shift occurred because Critical GIS scholars were never comfortable with community and consensus-based action in the first place. In hindsight, it probably is easier to critique the (individual) emancipatory potential of the technology than its (collective) empowerment potential. Moreover, Critical GIS researchers have shifted their attention away from geographic information systems towards the software stack of geospatial software and geosocial media, which raises questions about what should be considered under this term. For all of these reasons and more, we decided to investigate the “world building” of both the instrumentalist scientists and their critics.

We do use some philosophical framing (Albert Borgmann has a great idea called the device paradigm) to analyse what is happening, and we hope that the paper will contribute to the debate in the critical studies of geographical information beyond the confines of GIScience and into human geography more broadly.

About the authors:

Renée E. Sieber is an Associate Professor in the Department of Geography and the School of Environment at McGill University. Muki Haklay is Professor of Geographical Information Science in the Department of Civil, Environmental and Geomatic Engineering at University College London. 

Mapping the “Tribes” of London

By Alex Singleton, University of Liverpool, UK

Our paper, The internal structure of Greater London: a comparison of national and regional geodemographic models, recently published in Geo, explores the geography of where we live to identify 19 distinctive “tribes” that characterise London neighbourhoods. This London Output Area Classification (LOAC) was created in collaboration with the Greater London Authority.

We employ an area classification technique referred to as geodemographics: a set of methods initially developed in the 1970s (with a model of Liverpool) by Richard Webber. Further details are given in our paper; in brief, however, geodemographic classifications are created using a computational technique that compares multiple attributes of areas (e.g. demographics, employment, built structures) and places the areas into clusters that maximise within-cluster similarity. The clusters are then summarised with names and descriptions.
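
As a rough illustration of this clustering step (a minimal sketch, not the LOAC methodology itself, which the paper details), the following Python snippet shows the general idea. The input file, its columns and the use of k-means here are illustrative assumptions; a real classification involves careful variable selection, testing and labelling.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one census Output Area; the columns are attribute rates
# (demographics, employment, built structures, etc.). The file name and
# layout are hypothetical.
areas = pd.read_csv("output_area_attributes.csv", index_col="oa_code")

# Standardise the attributes so that no single variable dominates the
# distance calculations.
X = StandardScaler().fit_transform(areas)

# Partition the areas into 19 clusters that maximise within-cluster
# similarity (19 mirrors the number of LOAC "tribes").
areas["cluster"] = KMeans(n_clusters=19, n_init=10, random_state=0).fit_predict(X)

# Each cluster's average attribute profile is the starting point for
# summarising it with a name and description.
print(areas.groupby("cluster").mean().round(2))

Running the same procedure on only the London rows of such a table, rather than on the full national dataset, captures in miniature what distinguishes a regional classification like LOAC from a national one like OAC.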

Within the UK, the Output Area Classification (OAC) is an example of a geodemographic classification; it was created on behalf of the Office for National Statistics from census data. Classifications exist for both 2001 and 2011, and both were built with an entirely open methodology. However, one criticism of national classifications such as OAC is that they do not adequately accommodate local or regional structures that diverge from national patterns, an issue that is especially acute for London. This can be illustrated with maps of the 2011 OAC for London and for the much smaller city of Liverpool.

A map of OAC SuperGroups in Liverpool. Source: http://oac.datashine.org.uk

 

A map of OAC SuperGroups in London. Source: http://oac.datashine.org.uk

The problem with the national classification in the context of London is evident from these images: the majority of London is classified into just three clusters. The London-specific classification, by contrast, presents a much more variegated picture of the capital.

 

A map of LOAC SuperGroups in London. Source: http://loac.datashine.org.uk

The best way to view the classification is on the website (http://loac.datashine.org.uk), where you can search for your postcode – you can even let us know if you think we got your neighbourhood wrong!

About the author:

Alex Singleton is Professor of Geographic Information Science at the University of Liverpool. Alex’s Geo paper was co-authored with Paul Longley, Professor of Geographic Information Science at UCL.

References:

Singleton, A. D., and Longley, P. (2015) The internal structure of Greater London: a comparison of national and regional geodemographic models. Geo: Geography and Environment, doi: 10.1002/geo2.7.

Further reading:

  • More London-Liverpool Geodemographics Factoids:

In addition to the first UK geodemographic classification being created for Liverpool by Richard Webber (himself a graduate of the University of Liverpool), and this paper being a University of Liverpool / UCL collaboration, one of the earliest examples of area classification in the context of London is the set of maps created by Charles Booth between 1889 and 1903. Booth was a Liverpudlian philanthropist; his maps were created through direct observation and partitioned London into a series of summarising groups, which are available to view online.

  • For more on the history of geodemographics in the US and the UK, see our other open access paper on the subject:

Singleton, A. and Spielman, S. (2013). The Past, Present and Future of Geodemographic Research in the United States and United Kingdom. Professional Geographer, 66(4), 558-567.

A response to Mike Hulme’s “Climate and its changes: a cultural appraisal”

By Werner Krauss, University of Hamburg, Germany

A cultural appraisal of climate and its changes is more than simply adding the social sciences and humanities to climate research; it fundamentally changes the concept of climate change and, as a consequence, the nature of climate politics. For a long time, culture was treated as the object of analysis for the social sciences, their contribution to the successful implementation of science-based climate policies. But Mike Hulme (2015) reminds us in a friendly fashion that climate is more than the statistics of average weather or a system of interconnected spheres and global thresholds. For him, climate is first and foremost an idea that helps to stabilise the relationships between cultures and weather, with climate change as the latest step in the cultural evolution of this idea. His approach differs fundamentally from the conception of global climate politics framed by planetary boundaries and aimed at stabilising climate at two degrees or less above preindustrial levels; his cultural appraisal suggests an alternative to the regime of experts and to fantasies about the magic of big data and technological solutions.

In an interview with the Guardian (2007), the anthropologist Melissa Leach once coined the drastic term “bullshit research”. She had conducted ethnographic research in hot spots of environmental and climate change in Africa, and the reality she encountered there differed profoundly from the scientific scenarios. Explaining drought, migration or conflict as results of climate change was more often than not plainly wrong; the causes were complex, and the scientific attributions were prematurely drawn from model calculations rather than based on empirical evidence. In their book Misreading the African Landscape (1996), she and James Fairhead documented how scientists and environmentalists had interpreted desertification in the savannah as a result of deforestation by the indigenous population; in reality, it was the indigenous people who had planted the existing trees to fight desertification. This case is not unique: in many case studies, ethnographers find complex realities instead of simple, one-dimensional explanations like water or climate wars.

While Mike Hulme’s article focuses on making a general argument for introducing culture into the debate about a changing climate, there remains the question of what a no-bullshit research agenda might look like. I doubt that “culture” is an appropriate unit for research: while it makes sense to say that “other cultures” have concepts of the culture-weather nexus that differ from our “modern” ones, it is impossible to single out specific cultures as consistent and autonomous, still less to delineate a geographical space identical with a culture (even though, in climate research, outdated conceptions like culture areas or climate determinism are coming back to life). How, then, to conduct climate research while avoiding the pitfalls of current top-down conceptions?

In his article, Mike Hulme introduces the concept of landscape to illustrate the “dyad of climate-culture”. Landscapes are far more than visual or aesthetic representations, and they are not static formations frozen in time and space. Instead, they are social practices that designate the process of making space. They are the result of the interaction of nature, culture and history, but also of symbols, perceptions and imaginaries; or, in the terms of Latour’s actor-network theory, they are networks animated by human and non-human interactions. It is here that we can observe and critically analyse the transition from land-scapes into climate-scapes, with climate politics as one of the main drivers. Landscapes are political assemblies where matters of concern are decided, such as questions concerning property, access to land, or weather-related issues like coastal protection and the transformation of former rural areas into emerging energy landscapes. Managing landscapes successfully requires the consent of those who inhabit, shape and administer them; only then does climate change indeed mean the “re-negotiation of cultural relationships between humans and their changing weather”, and only then does climate change finally become an emergent form of life (Callison 2014).

Thus, Mike Hulme indeed offers an approach to climate change that differs profoundly from the current science-based understanding. There is more to his cultural appraisal than simply adding the social sciences and humanities to climate science; at stake are differing ideas of governance, of democracy, and of power relations, both inside science and in the relations between science, politics and society. Conflicts and frictions are unavoidable where expert regimes rub up against societies and cultures; instead of dreaming the impossible dream of stabilising climate, a cultural appraisal of climate offers insight into the potential of specific landscapes to deal with changing climates.

About the author:

Werner Krauss is currently a fellow at the Cluster of Excellence “CliSAP” (Integrated Climate System Analysis and Prediction), University of Hamburg, project “Understanding science in interaction” (USI). As a cultural anthropologist, his main focus of research is on human-environment relationships, the anthropology of landscapes and heritage, and climate change. He is an editor of the climate blog Die Klimazwiebel.

References:

Callison, Candis (2014) How Climate Change Comes to Matter: the Communal Life of Facts. Duke University Press.

Fairhead, J. and M. Leach (1996) Misreading the African Landscape: Society and Ecology in a Forest-Savanna Mosaic. Cambridge University Press.

The Guardian (2007) Melissa Leach: The Village Voice. http://www.theguardian.com/education/2007/jul/17/highereducationprofile.academicexperts (accessed 17 July 2015).

Hulme, Mike (2015) Climate and its changes: a cultural appraisal. Geo: Geography and Environment, doi: 10.1002/geo2.5.

Response to Leonelli et al (2015): Thinking About “Open” Science

By James Porter, University of Leeds, UK

As a research community we’re being urged to “open” science up like never before. Whether it’s our research results, the methods used to make sense of them, or even the underlying raw data, everything we do should be made freely and easily accessible to the widest possible variety of people, in the widest possible variety of ways. Great strides have already been made. As Leonelli et al (2015) note, we’ve seen the push towards “open” access to published research results; “open” data deposited in repositories; and “open” source licences for research materials (e.g. code, models). All of this edges us closer to the ethos behind “open” science, or Science 2.0: to encourage greater equality, widen participation, and stimulate innovation.

Indeed, “open” science has already been heralded as a success, helping scientists find answers to decade-old problems. Scientists at the University of Washington who were struggling to discover the structure of a protein that helps HIV multiply, for example, turned to the developers of Foldit, an online game in which players rearrange a protein to find its most stable configuration, which is likely to be its natural form. Within three weeks, over 57,000 players had arrived at an answer, which was published in Nature Structural & Molecular Biology. None of this would have been possible if that research had remained hidden from public view behind journal subscriptions or locked away in our ivory towers.

It’s somewhat ironic, then, that we’re being asked to make things “open” yet constantly reminded to refrain from sharing our findings prematurely. This is due in no small part to a prevailing institutional culture of publish-or-perish (e.g. the REF); the creep towards commercialising science and locking down intellectual property to block rivals (e.g. OncoMouse); and concerns over allowing others to cast doubt or breed misunderstanding (e.g. the leaked UEA emails). How science is opened up so that it’s usable and useful, not just available; who should be doing it (early career researchers or established professors); and when research should be released (before or after publication) are all tricky questions that researchers must grapple with today. “Sticks” and “carrots”, as Leonelli et al (2015) argue, may incentivise “open” science, but they are unlikely to fully succeed unless the underlying institutional and social norms and values governing research are addressed as well. Many of these institutional and epistemic norms touch on the changing spaces of science that geographers engage with.

The UK government, for instance, has set the Met Office on a course for “open” science. In a pointed rebuttal to critics who claim that it has stifled innovation through a monopoly over meteorological and climate data, the Met Office is set to “open” things up. The once fine distinction between data used for non-commercial purposes and for commercial ones is no more. Today, a new policy places data into one of three categories (open, research and managed), which dictates who can access it and what they can do with it (not everyone can be trusted, apparently). Making the data fit into these categories ignores their hybrid, contested and evolving nature, whereby a dataset may start life as one thing but change over time as more things are added. Efforts to make the data manageable not only reflect the politics of their construction and circulation but also the tension the Met Office faces between giving away its data and services and making money from them.

Much of the logic behind the “open” science movement shares similarities with neoliberal thinking. Will making raw data freely available via repositories reduce inequalities between the data-rich and the data-poor, or simply allow those with the resources, capacity and infrastructures to widen them? Will the ability to reproduce, verify and challenge research results bolster the status of science, à la Robert Merton, or make it harder to differentiate rigorous science from junk science, making the latter easier to sell for PR purposes? And does opening up research results, data and materials constitute a valuable endeavour in itself, or one that is only realisable when equipped with the right expertise?

Yes, “open” science is certainly welcome in exposing a whole raft of cultural practices (and politics) we take for granted in academia today, and it helps us respond to the needs of the twenty-first century. But before we fully embrace “open” science we need to think critically about its politics. Critical scholars have told us time and again how neoliberalism worsens inequalities, reduces participation, and restricts innovation to marketable products and services. We need to ask for whom science is being opened, how “democratic” that process is, and of course what deep-seated politics are being advanced as things get opened up. These are issues Leonelli has raised in relation to biology in the Bulletin of Science, Technology & Society, but tracing these unfolding dynamics in relation to geographical data, and in open access journals like Geo, is up to all of us.

About the author: 

Dr James Porter is a Research Fellow in the School of Earth and Environment at the University of Leeds. James’ work specialises in how institutional politics shapes the production, and in turn the use, of environmental knowledge for policy, through the lens of science and technology studies (STS) and the management of risk and uncertainty.

References:

Leonelli, S. et al. (2015) Sticks and Carrots: Encouraging Open Science at its source. Geo: Geography and Environment, doi: 10.1002/geo2.2.

Reflexion: Does the logic of the University sector allow space for Open Science? A response to Leonelli et al.

By George Adamson, King’s College London, UK

How does a researcher gain legitimacy? Within the UK context, legitimacy is increasingly conferred by the six-yearly Research Excellence Framework (REF) exercise, which drives departmental funding. Researchers must demonstrate entrepreneurial innovativeness and international relevance, situate the wider relevance of their research against a shifting definition of ‘impact’, and demonstrate the ability to attract and maintain a satisfied student body. In a hyper-competitive academic market, such neoliberal codes of success are increasingly important. The Open Access issue must be considered within this context.

The fields of historical climatology and palaeoclimatology (my own disciplines) have made large strides towards the kind of open access described in the paper by Sabina Leonelli and colleagues, recently published in Geo (Leonelli et al, 2015). Web portals such as the National Climatic Data Center provide a repository for the results of published climate reconstructions. Further moves are being made towards establishing repositories of raw data, particularly narrative information from sources such as diaries, personal correspondence and government reports. Such descriptions of meteorological variability and of climate-related phenomena and activities can be used both for quantitative reconstructions of past climate and for a multitude of perspectives on human-environment relationships. The ongoing ACRE (Atmospheric Circulation Reconstructions over the Earth) project, run from the UK Met Office, is envisaging a dynamical, global, four-dimensional database of historical weather that incorporates everything from state-of-the-art reanalysis data to cultural interpretations of climate. This is in addition to existing databases such as Euro-Climhist and the tambora.org archive (the climate and environmental history collaborative research environment).

Such approaches are important for encouraging the cross-disciplinary work that is increasingly recognised as necessary within climate change research (Hulme 2011). Online repositories also allow for public ownership of climate data, an endeavour that can at times be frustrating, given the ways climate data have been used by some for personal attacks on climate scientists. This is not to say that such endeavours are imprudent: the sharing of climate data should ultimately break down, rather than reinforce, disagreements. Citizen science projects such as Old Weather (oldweather.org) take public ownership even further, with non-academics actively involved in the data management process.

Such ownership, however, can only really be partial. The institutional culture outlined above creates huge pressure to analyse, interpret and publish before any data are shared. Departments, competing for ‘world class’ research outputs, are reluctant to relinquish ownership of data before outputs have been generated. The goal of ‘research for all’, as witnessed from outside the academy, is at odds with this individualised logic within it. This is a long way from Science 2.0, at least within geography. To reach the point where science can be undertaken in collaboration with any interested party would require a paradigm shift in the way that universities are run and in what is prioritised, something which could have been given more emphasis in the paper (Leonelli et al, 2015). The ongoing implications of the 2012 European Commission Recommendation (EC 2012), which recommends a fundamental change in the way academic careers are evaluated so as to include data-sharing, will therefore be interesting to follow. Such a cultural shift would not be unwelcome.

About the author: 

Dr George Adamson is Lecturer in Geography at King’s College London. George’s research is situated at the interface between palaeoclimatology, environmental history and climate change adaptation and policy.

References:

European Commission (2012) Recommendation on access to and preservation of scientific information. Accessed on 12 November 2014

Hulme, M. (2011) Meet the humanities. Nature Climate Change, 1, 177-179.

Leonelli, S. et al. (2015) Sticks and Carrots: Encouraging Open Science at its source. Geo: Geography and Environment, doi: 10.1002/geo2.2.

Co-producing openness

A couple of weeks ago I spent a few days at the annual conference of the RGS-IBG, where nearly 2,000 delegates had gathered to discuss a huge variety of ideas and interpretations linked to the conference theme of ‘geographies of co-production’ – a theme selected by conference chair Wendy Larner that had clearly resonated with the geography community. One of the highlights for me was attending the first meeting of Geo’s editorial board, and the journal had a strong presence throughout the conference, through sponsored events, promotional literature, and more informal discussions between colleagues about the meaning and practice of open-access publishing.

During the conference a number of different meanings of ‘co-production’ were doing the rounds, and here I just want to reflect on two. Firstly, co-production can refer quite straightforwardly to ‘doing things together’ – in the case of this conference, much reflection centred on the practices and politics of collaborative knowledge-making that challenges conventional hierarchies of expertise and institutional boundaries. How academics publish their work has, of course, many implications for the co-production of knowledge. If the ‘subjects’ of research are re-cast as co-producers of research, then access to the finished products takes on an extra set of ethical imperatives. Is it right to co-produce knowledge with, for example, non-academics, only for that knowledge to then be trapped in the circuits of academic journals and institutional subscriptions?

Many at the conference argued that knowledge is itself always-already co-produced. Knowledge does not just spring forth from individual brilliance in thought or research, but rather is something with multiple authors, historically textured with many unacknowledged sources, and gathered through experience and encounter as much as through methodical ‘research’. Who, then, does the knowledge contained in an academic journal article belong to? To whom should it be accessible, and accountable?

These are the kinds of arguments that have been animating the debate about open-access publishing. Many feel an ethical imperative to open-up their research to broader publics both in its production and its dissemination. Open-access publishing can therefore rightly be considered one process of co-production – of unsettling conventional boundaries, be they epistemic, institutional or economic.

Another reading of ‘co-production’ which was gaining attention at the conference was that developed in the field of science & technology studies (STS) – a discipline whose core concern is the way in which human societies make and use authoritative knowledge. In this field (forgive the generalisation), ‘co-production’ is taken to mean not necessarily people doing things together, but rather the mutual structuring of society and its forms of knowledge-making. The argument is that the way in which we make sense of the world is co-produced with the way in which we live in the world. Knowledge is at least partly a social thing, reflecting social structures and concerns. But knowledge also in turn influences those social structures. So people in STS, like Sheila Jasanoff, talk about the ‘co-production of science and social order’, to refer to the way in which scientific change and social change occur together.

With this particular co-productionist hat on, we might start to think about open-access publishing as a site of co-production. It is a site in which new structures of knowledge-making are emerging at the same time as new social relations are being developed. Open-access publishing is a site where much broader struggles about the relationship between the academy and wider society are being played out. Open-access journals like Geo have a radically different funding structure, with the costs of production covered not by readers but by contributors – in practice, by research funding bodies or the authors’ academic institutions. So the power structures inherent in any form of knowledge production and dissemination are changing.

The move to openness in the academy parallels, and is arguably informed by, broader moves towards initiatives like open data and so-called ‘open policy-making’ in government. Notions and practices of accountability are also in flux, with visual metaphors such as ‘openness’ promising new forms of democratic space. Technologies like the internet alter the possibilities and meaning of ‘openness’ in relation to powerful institutions, potentially creating new lines of visibility where before there may have been none. Yet STS also encourages us to practice analytical symmetry and therefore to ask whether and where processes of ‘opening up’ may be accompanied by processes of ‘closing down’.

So open-access publishing is part of the process of co-producing knowledge. It is also a site where the broader co-production of knowledge and social order is taking place. Relationships between publishers, authors, readers, funders, the academy, the state and broader publics – all of these are up for grabs and under negotiation in a variety of ways. This demands constant sensitivity to the multiple, dispersed effects of an initiative like open-access publishing. Geo is just one experimental site where new social relations are being explored, reflected upon, and practiced. Being involved in the journal from the outset as a member of the editorial board is thus not only a great opportunity to see these processes in action. It’s also an opportunity to, in a small way, help shape them.

Martin Mahony