One of the things I did this past spring was an independent study—a vehicle by which to move through my dissertation’s tentative bibliography, at a pace of around two books at a time, every two weeks, and to write short comparative analyses of the texts. These books covered intersections of philosophy, psychology, theology, machine consciousness, and Afro-Atlantic magico-religious traditions, and I thought my reviews might be of interest, here.
My first two books in this process were Frantz Fanon’s Black Skin, White Masks and David J. Gunkel’s The Machine Question, and while I didn’t initially have plans for the texts to thematically link, the first foray made it pretty clear that patterns would emerge whether I consciously intended them to or not.
[Image of a careworn copy of Frantz Fanon’s BLACK SKIN, WHITE MASKS, showing a full-on image of a Black man’s face wearing a white anonymizing eye-mask.]
In choosing both Fanon’s Black Skin, White Masks and Gunkel’s The Machine Question, I was initially worried that they would have very little to say to each other; however, on reading the texts, I instead found myself struck by how firmly the notions of otherness and alterity were entrenched throughout both. Each author, for very different reasons and from within very different contexts, explores the preconditions of otherness, its ethical implications, and a course of actions necessary to rectify its coming-to-be.
Over at AFutureWorthThinkingAbout, there is the audio and text for a talk about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. This means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, ablebodied, neurotypical male:
My starting positions, here, are that, 1) in order to do the work correctly, we literally must refrain from resting in abstraction, where, by definition, the kinds of models that don’t seek to actually engage with the people in question from within their own contexts, before deciding to do something “for someone’s own good,” represent egregious failure states. That is, we have to try to understand each other well enough to perform mutually modeled interfaces of “what you’d have done unto you and what they’d have you do unto them.” I know it doesn’t have the same snap as “do unto others,” but it’s the only way we’ll make it through.
[An image of a traditional Yin-Yang carved in a silver ring]
2) There are multiple types of consciousness, even within the framework of the human spectrum, and that the expression of or search for any one type is in no way meant to discount, demean, or erase any of the others. In fact, it is the case that we will need to seek to recognize and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. With that in mind, I use “machine consciousness” to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in “AGI.”
For the sake of this, as I’ve related elsewhere, I (perhaps somewhat paradoxically) think the term “artificial intelligence” is problematic. Anything that does the things we want machine minds to do is genuinely intelligent, not “artificially” so, where we use “artificial” to mean “fake,” or “contrived.” To be clear, I’m specifically problematizing the “natural/technological” divide that gives us “art vs artifice,” for reasons previously outlined here.
The overarching project of training a machine learning program and eventual AI will require engagement with religious texts (a very preliminary take on this has been taken up by Rose Eveleth at the Flash Forward Podcast), but also a broader engagement with discernment and decision-making. Even beginning to program or code for this will require us to think very differently about the project than has thus far been in evidence.
I spoke with Klint Finley, known to this parish, over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, yesterday, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world.
This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier just this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more people are increasingly willing to have this discussion, at all.
To see my comments and read the rest of the article, click through, above.