A dialogue between ontology and epistemology

An announcement concerning the upcoming EARLI Conference hosted at the Faculty of Education:

You may be aware of a conference taking place here, at the Faculty of Education, on August 27th-28th 2018. This is being co-organised by two special interest groups (‘Methods in Learning Research’ [SIG17] and ‘Educational Theory’ [SIG25]) of the European Association for Research on Learning and Instruction (EARLI). The theme of the conference is ‘Dialogue between ontology and epistemology: New perspectives on theory and methodology in research on learning and education’.

The response to this conference has been extremely positive, with 84 papers or workshops accepted after peer review. The two keynote speakers are Susan Robertson (Faculty of Education) and Martyn Hammersley (Open University). You can find the preliminary programme here. ‘Early Bird’ registration for the main conference is still possible until June 30th (see http://theoryandmethods.com/register/).

Note in particular that several pre-conference workshops are taking place on Sunday 26th August:

Local colleagues and doctoral students are invited to participate in these workshops even if they do not register for the main conference (JURE members – free of charge / all others €10). To sign up for one or more workshops, please first contact the organisers (see here for the contact info) to check whether there is space in the workshop of interest, before registering for the pre-conference here.

Human agency beyond platform structuralism and platform voluntarism

By Mark Carrigan

Over the last year, I have found myself obsessing ever more frequently about agency and platforms. Given that I spent six years writing a PhD about human agency, it was perhaps inevitable that this would be the lens I bring to the analysis of platforms. But it also reflects a sustained weakness in how the role of agency in platforms is conceptualised, as well as in the political implications which are seen to flow from this. It is a weakness we can make sense of using Margaret Archer’s notion of conflation, developed to explain how different theorists have sought to solve the problem of structure and agency.

I want to suggest that there is a fundamental ambiguity about platforms which plays out at both political and ontological levels. This ambiguity reflects a failure to make sense of how platforms exercise a causal influence over human beings and how human beings exercise a causal influence over platforms. Platform structuralism takes many forms, but it fundamentally sees human behaviour as moulded by platforms, which leverage insights into the social, psychological and/or neurological constitution of human beings to condition their behaviour in predictable and explicable ways. It takes the platform as the horizon of human action, framing human beings as responding to the incentives and disincentives to be found within its architecture. It is often tied to a politics which sees platforms as generating pathological, even addictive, behaviours. It conflates downwards, taking agency as an epiphenomenon of (platform) structure.

Critiques informed by platform structuralism often seem to have put their finger on something important, while remaining overstated in a way that is hard to pin down. My suggestion is that this overstatement reflects a failure to come to terms with the fundamental relation between the platform and the user. How do platforms exercise a causal influence over their users? Their interventions are individualised in a statistical way, rather than a substantive one. These are instruments which are simultaneously precise yet blunt. While they might be cumulatively influential, particular instances are liable to be crude and ineffective, often passing unnoticed in the life of the user. For this reason we have to treat the causal powers of platforms over their users with extreme care. It is also something which varies immensely between platforms, and the ontology of platforms designed for multi-sided markets is a more complex issue for another post.

Platform voluntarism is often a response to the overstatement of platform structuralism. Denying the capacity of platforms to mould their users, it frames platforms as simply providing incentives and disincentives, able to be ignored by users as readily as they are embraced. The platform is simply a stage upon which actors act, perhaps facilitating new categories of action but doing nothing to shape the characteristics of the agents themselves. It conflates upwards, treating the platform (structure) as a straightforward expression of the aggregate intentions of its users. Both platform voluntarism and platform structuralism tend to reify platforms, cutting them off in different ways from both users and the wider social context in which they are used. What gets lost is human agency and the ways in which these infrastructures shape and are shaped by human agents.

Another reason it is so crucial to retain agency as a category is that these platforms are designed in purposive ways. Unless we have an account of how they come to have the characteristics they do because people have sought to develop them in specific ways, we risk lapsing into a form of platform structuralism in which we take platforms as an a priori horizon within which human beings act. They are simply given. We might inquire into the characteristics of platforms in other capacities, including as business models, but we won’t link this to our account of how platforms condition the social action of users taking place within and through them. We will miss the immediate reactivity of platforms to their users, as well as the many human, rather than merely algorithmic, mechanisms at work. But more broadly, we will take the conditioning influences as a given rather than as something to be explained. In such a case, we treat user agency and engineering agency as unrelated to each other and fragment a phenomenon which we need to treat in a unified way.

If we want to draw out these connections, it becomes necessary to understand how engineers design platforms in ways that encode an understanding of users and seek to influence their action. If we can provide thick descriptions of these projects, capturing the perspective of engineers as they go about their jobs, it becomes much easier to avoid the oscillation between platform structuralism and platform voluntarism. Central to this is the question of how platform engineers conceive of their users and how they act on these conceptions. What are the vocabularies through which they make sense of how their users act and how their actions can be influenced? Once we recover these considerations, it becomes harder to support the politics which often flows from platform structuralism. As Jaron Lanier writes on loc 282 of his Ten Arguments for Deleting Your Social Media Accounts Right Now:

There is no evil genius seated in a cubicle in a social media company performing calculations and deciding that making people feel bad is more “engaging” and therefore more profitable than making them feel good. Or at least, I’ve never met or heard of such a person. The prime directive to be engaging reinforces itself, and no one even notices that negative emotions are being amplified more than positive ones. Engagement is not meant to serve any particular purpose other than its own enhancement, and yet the result is an unnatural global amplification of the “easy” emotions, which happen to be the negative ones.

He suggests we must replace terms like “engagement” with terms like “addiction” and “behavior modification”. Only then can we properly confront the political ramifications of this technology, because our description of the problems will no longer be sanitised by the now familiar discourse of Silicon Valley. But this political vocabulary would be unhelpful for sociological analysis because it takes us further away from the lifeworld of big tech. It is only if we can establish a rich understanding of the agency underlying the reproduction and transformation of platforms that we can overcome conflationism in our approach to platforms, in the form of the contrasting tendencies towards platform structuralism and platform voluntarism.

Talk: The Idea of a Digital University, David Berry, 12 June, 4-5.30pm

12 June, 4-5.30pm, Faculty of Education, 184 Hills Rd., Donald Macintyre Building, Room 2S3 (second floor)

In this talk, I set out to examine the ways in which the university, as an idea, was discussed, written about and actively debated over a long period of history. I aim to develop a set of critical research questions and problematics in relation to the university, and also to reassemble a set of concepts for thinking about the university in a digital age. When and why does the question of the “idea of a university” become important? Are there particular historical patterns or social conflicts that generate the conditions for the questioning of the university? Why has the university become such an important site of criticism today?

I also think it is important to ask who it is that is thinking about the idea of a university in each period, as this is, I think, another important aspect in explaining both the specificity of the questioning and the kinds of answers that are generated in each historical period. Lastly, I want to highlight that asking the question of the idea of the university is important for another reason: it brings to the fore moments when the university itself is under contestation, whether by the academics and staff who inhabit it, by the state, or by other social forces that may create the conditions for the university’s radical reconfiguration.

David M. Berry is Professor of Digital Humanities at the University of Sussex, Visiting Fellow at CRASSH and Wolfson College, Cambridge, and an associate member of the Faculty of History, University of Oxford.

His most recent books are Critical Theory and the Digital and Digital Humanities: Knowledge and Critique in a Digital Age (with Anders Fagerjord).

All are welcome. The Faculty of Education is about a 15-minute cycle and a 30-minute walk from central Cambridge, and 10 minutes from Cambridge train station.

Donald Macintyre Building is fully accessible.

For questions about the seminar, contact Jana Bacevic (jb906@cam.ac.uk).


Call: Moral Machines: Ethics and Politics of the Digital World

This symposium might be of interest to those within our cluster who took part in the platform capitalism reading group:


CALL FOR PAPERS:

MORAL MACHINES? THE ETHICS AND POLITICS OF THE DIGITAL WORLD

6-8 March 2019, Helsinki Collegium for Advanced Studies, University of Helsinki

With confirmed keynotes from N. Katherine Hayles (Duke University, USA) and Bernard Stiegler (IRI: Institut de Recherche et d’Innovation at the Centre Pompidou de Paris)

As our visible and invisible social reality becomes increasingly digital, the question of the ethical, moral and political consequences of digitalization is ever more pressing. This issue is too complex to be met only with instinctive digiphilia or digiphobia. No technology is just a tool: all technologies mark their users and environments. Digital technologies, however, mark them much more intimately than any previous ones have done, since they promise to think in our place – so that they not only enhance homo sapiens’ most distinctive feature but also relieve them of it. We entrust computers with more and more functions, and their help is indeed invaluable, especially in science and technology. Some fear or dream that in the end they will become so invaluable that a huge Artificial Intelligence or Singularity will take control of the whole affair that humans deal with so messily.

The symposium “Moral Machines? The Ethics and Politics of the Digital World” welcomes contributions addressing the various aspects of the contemporary digital world. We are especially interested in the idea that despite everything they can do, the machines do not really think, at least not like us. So, what is thinking in the digital world? How does the digital machine “think”? Both of our confirmed keynote speakers, N. Katherine Hayles and Bernard Stiegler, have approached these fundamental questions in their work, and one of our aims for this symposium is to bring their approaches together for a lively discussion. Hayles has shown that, for a long time, computers were built on the assumption that they imitate human thought – while in fact the machine’s capability for non-embodied and non-conscious cognition sets it apart from everything we call thinking. For his part, Bernard Stiegler has shown how technics in general and digital technologies in particular are specific forms of memory that is externalized and made public – and that, at the same time, becomes very different from and alien to individual human consciousness.

We are seeking submissions from scholars studying different aspects of these issues. Prominent work is being done in many fields ranging from philosophy and literary studies to political science and sociology, not forgetting the wide umbrella of the digital humanities. We hope that the symposium can bring together researchers from multiple fields and thus address the ethics and politics of the digital world in an interdisciplinary and inspiring setting. In addition to the keynotes, our confirmed participants already include Erich Hörl, Frédéric Neyrat and François Sebbah.

We encourage approaching our possible list of topics (see below) from numerous angles, from philosophical and theoretical to more practical ones. For example, the topics could be approached from the viewpoint of how they have been addressed within the realm of fiction, journalism, law or politics, and how these discourses possibly frame or reflect our understanding of the digital world.

The possible list of topics, here assembled under three main headings, includes but is not limited to:

  • Thinking in the digital world:
      • What kind of materiality conditions digital cognition?
      • How does the nonhuman and nonconscious digital world differ from embodied human thought?
      • How do digital technologies function as technologies of memory and thought?
      • What kind of consequences might their usage in this capacity have in the long run?
  • The morality of machines:
      • Is a moral machine possible?
      • Have thinking machines invalidated the old argument according to which a technology is only as truthful and moral as its human user? Or can truthfulness and morals be programmed (as the constructors of self-driving cars apparently try to do)?
      • How is war affected by new technologies?
  • The ways of controlling and manipulating the digital world:
      • Can and should the digital world be politically controlled, as digital technologies are efficient means of both emancipation and manipulation?
      • How can we control our digital traces and the data gathered about us?
      • On what assumptions are national and global systems (e.g., financial systems, global commerce, national systems of administration, health and defense) designed, and do we trust them?
      • What does it mean that public space is increasingly administered by technical equipment made by very few private companies whose copyrights are secret?

“Moral Machines? The Ethics and Politics of the Digital World” is a symposium organized by two research fellows, Susanna Lindberg and Hanna-Riikka Roine at the Helsinki Collegium for Advanced Studies, University of Helsinki. The symposium is free of charge, and there will also be a public evening programme with artists engaging the digital world. Our aim is to bring together researchers from all fields addressing the many issues and problems of the digitalization of our social reality, and possibly contribute towards the creation of a research network. It is also possible that some of the papers will be invited to be further developed for publication either in a special journal issue or an edited book.

The papers to be presented will be selected based on abstracts, which should not exceed 300 words (plus references). Add a bio note (max. 150 words) that includes your affiliation and email address. Name your file [firstname lastname] and submit it as a pdf. If you wish to propose a panel of 3-4 papers, include a description of the panel (max. 300 words), the papers (max. 200 words each), and bio notes (max. 150 words each).

Please submit your proposal to moralmachines2019@gmail.com by 31 August 2018. Decisions on the proposals will be made by 31 October 2018.

For further information about the symposium, feel free to contact the organizers Susanna Lindberg (susanna.e.lindberg@gmail.com) and Hanna-Riikka Roine (hanna.roine@helsinki.fi).

The symposium web site: https://blogs.helsinki.fi/moralmachines/.

Critique and Agency in the Accelerated Academy

June 8th, 12pm to 2pm, DMB 2S4
Faculty of Education, Hills Road, Cambridge

In the fifth event in the Accelerated Academy series, the Cultural Politics and Global Justice cluster at the University of Cambridge’s Faculty of Education hosts an afternoon seminar on critique and agency in the accelerated academy. How is temporality changing within the academy? What does this mean for our capacity to individually and collectively shape our working lives? Is there still space for critique within an academy where time pressure has become the norm?

  • Time present and academic futures – Jana Bacevic (Faculty of Education, University of Cambridge)
  • On Critical University Studies – Alison Wood (CRASSH, University of Cambridge)
  • The Coming of the Venture Academic – Filip Vostal (Institute of Philosophy of the Czech Academy of Sciences)

Each speaker will talk for around 20 minutes, with time for questions. We will then open out for a broader discussion of the themes raised during the talks. For information about the Accelerated Academy project, see the website or special section of the LSE Impact Blog.