Tag: nonhuman personhood

Affect and Artificial Intelligence and The Fetish Revisited

Elizabeth A. Wilson’s Affect and Artificial Intelligence traces the history and development of the field of artificial intelligence (AI) in the West, from the 1950s through the 1990s and early 2000s, to argue that the key thing missing from every attempt to develop machine minds is a recognition of the role that affect plays in social and individual development. She directly engages many of the creators of the field of AI within their own lived historical contexts, and uses Bruno Latour, Freudian psychoanalysis, Alan Turing’s AI and computational theory, gender studies, cybernetics, Silvan Tomkins’ affect theory, and tools from STS to make her point. Using historical examples of embodied robots and programs, as well as some key instances in which social interactions caused rifts in the field, Wilson argues that the most crucial of the missing affects is shame, which functions from the social to the individual, and vice versa.

[Cover of Elizabeth A. Wilson’s Affect and Artificial Intelligence]

J. Lorand Matory’s The Fetish Revisited looks at a particular section of the history of European-Atlantic and Afro-Atlantic conceptual engagement, namely the place where Afro-Atlantic religious and spiritual practices were taken up and repackaged by white German men. Matory demonstrates that Marx and Freud took the notion of the Fetish and repurposed its meaning and intent, further arguing that this repurposing was a product of the positionality of both of these men in their historical and social contexts. Both Marx and Freud, Matory says, were Jewish men of potentially indeterminate ethnicity who could have been read as “mulatto,” and whose work was designed to place them in the good graces of the white supremacist, or at least dominantly hierarchical, power structure in which they lived.

Matory combines historiography, anthropology, ethnography, oral history, critical engagement with Marxist and Freudian theory, religious studies, and personal memoir to show that the Fetish is a mutually constituting category, one rendered out of the intersection of individuals, groups, places, needs, and objects. Further, he argues, by trying to use the fetish to mark out a category of “primitive savagery,” both Freud and Marx actually succeeded in making fetishes of their own theoretical frameworks, both in the original sense and in their own pejorative senses.
Continue reading

Colonialism and the Technologized Other

One of the things I did this past spring was an independent study: a vehicle by which to move through my dissertation’s tentative bibliography, at a pace of around two books at a time, every two weeks, and to write short comparative analyses of the texts. These books covered intersections of philosophy, psychology, theology, machine consciousness, and Afro-Atlantic magico-religious traditions, and I thought my reviews might be of interest, here.

My first two books in this process were Frantz Fanon’s Black Skin, White Masks and David J. Gunkel’s The Machine Question, and while I didn’t initially have plans for the texts to thematically link, the first foray made it pretty clear that patterns would emerge whether I consciously intended them to or not.

[Image of a careworn copy of Frantz Fanon’s BLACK SKIN, WHITE MASKS, showing a full-on image of a Black man’s face wearing a white anonymizing eye-mask.]

In choosing both Fanon’s Black Skin, White Masks and Gunkel’s The Machine Question, I was initially worried that they would have very little to say to each other; however, on reading the texts, I instead found myself struck by how firmly the notions of otherness and alterity were entrenched throughout both. Each author, for very different reasons and from within very different contexts, explores the preconditions, the ethical implications, and a course of necessary actions to rectify the coming to be of otherness.

Continue reading

Pieces on Machine Consciousness

Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, “Bot Phenomenology,” a panel of people I was very grateful for and very lucky to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in a discussion about the potential nature of nonbiological phenomenology. Machine consciousness. What robots might feel.

I led them through with questions like “What do you take phenomenology to mean?” and “What do you think of the possibility of a machine having a phenomenology of its own?” We discussed different definitions of “language” and “communication” and “body,” though we unfortunately didn’t get to a conversation about how, under certain definitions of those terms, what would be considered language between cats would count only as a cat communicating via signalling to humans.

It was a really great conversation, and the livestream video is here and embedded below (for now; it may go away at some point, to be replaced by a static YouTube link, and when I know that has happened, I will update the links and embeds here).


Read the rest of Nonhuman and Nonbiological Phenomenology at A Future Worth Thinking About


Additionally, I have another quote about the philosophical and sociopolitical implications of machine intelligence in this extremely well-written piece by K.G. Orphanides at WIRED UK. From the article:

Williams, a specialist in the ethics and philosophy of nonhuman consciousness, argues that such systems need to be built differently to avoid a corporate race for the best threat analysis and response algorithms which [will be] likely to [see the world as] a “zero-sum game” where only one side wins. “This is not a perspective suited to devise, for instance, a thriving flourishing life for everything on this planet, or a minimisation of violence and warfare,” he adds.

Much more about this, from many others, at the link.

Until Next Time.

A Discussion on Daoism and Machine Consciousness

Over at AFutureWorthThinkingAbout, there is the audio and text for a talk about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, able-bodied, neurotypical male:

My starting positions, here, are that, 1) in order to do the work correctly, we literally must refrain from resting in abstraction, where, by definition, the kinds of models that don’t seek to actually engage with the people in question from within their own contexts, before deciding to do something “for someone’s own good,” represent egregious failure states. That is, we have to try to understand each other well enough to perform mutually modeled interfaces of “what you’d have done unto you and what they’d have you do unto them.” I know it doesn’t have the same snap as “do unto others,” but it’s the only way we’ll make it through.

[An image of a traditional Yin-Yang carved in a silver ring]

2) There are multiple types of consciousness, even within the framework of the human spectrum, and the expression of or search for any one type is in no way meant to discount, demean, or erase any of the others. In fact, it is the case that we will need to seek to recognize and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. With that in mind, I use “machine consciousness” to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in “AGI.”

For the sake of this, as I’ve related elsewhere, I (perhaps somewhat paradoxically) think the term “artificial intelligence” is problematic. Anything that does the things we want machine minds to do is genuinely intelligent, not “artificially” so, where we use “artificial” to mean “fake,” or “contrived.” To be clear, I’m specifically problematizing the “natural/technological” divide that gives us “art vs artifice,” for reasons previously outlined here.

The overarching project of training a machine learning program and eventual AI will require engagement with religious texts (a very preliminary take on this has been taken up by Rose Eveleth at the Flash Forward Podcast), but also a broader engagement with discernment and decision-making. Even beginning to program or code for this will require us to think very differently about the project than has thus far been in evidence.

Read or listen to the rest of A Discussion on Daoism and Machine Consciousness at A Future Worth Thinking About
