Tag: ethics

Affect and Artificial Intelligence and The Fetish Revisited

Elizabeth A. Wilson’s Affect and Artificial Intelligence traces the history and development of the field of artificial intelligence (AI) in the West, from the 1950s through the 1990s and early 2000s, to argue that the key thing missing from all attempts to develop machine minds is a recognition of the role that affect plays in social and individual development. She directly engages many of the creators of the field of AI within their own lived historical context, drawing on Bruno Latour, Freudian psychoanalysis, Alan Turing’s AI and computational theory, gender studies, cybernetics, Silvan Tomkins’ affect theory, and tools from STS to make her point. Using historical examples of embodied robots and programs, as well as some key instances in which social interactions caused rifts in the field, Wilson argues that crucial among all missing affects is shame, which functions from the social to the individual, and vice versa.

[Cover of Elizabeth A. Wilson’s Affect and Artificial Intelligence]

J. Lorand Matory’s The Fetish Revisited looks at a particular section of the history of European-Atlantic and Afro-Atlantic conceptual engagement, namely the place where Afro-Atlantic religious and spiritual practices were taken up and repackaged by white German men. Matory demonstrates that Marx and Freud took the notion of the Fetish and repurposed its meaning and intent, further arguing that this repurposing is a product of the positionality of both men within their historical and social contexts. Both Marx and Freud, Matory says, were Jewish men of potentially-indeterminate ethnicity who could have been read as “mulatto,” and whose work was designed to place them in the good graces of the white supremacist, or at least dominantly hierarchical, power structure in which they lived.

Matory combines historiography, anthropology, ethnography, oral history, critical engagement with Marxist and Freudian theory, religious studies, and personal memoir to show that the Fetish is a mutually constituting category, one rendered out of the intersection of individuals, groups, places, needs, and objects. Further, he argues, by trying to use the fetish to mark out a category of “primitive savagery,” both Freud and Marx actually succeeded in making fetishes of their own theoretical frameworks, both in the original sense and in their own pejorative senses.

Selfhood, Coloniality, African-Atlantic Religion, and Interrelational Culture

In African-Atlantic Cultures and the South Carolina Lowcountry, Ras Michael Brown takes up the history of the cultural and spiritual practices of African descendants in the American South. To do this, he traces the transport of central, western, and west-central African captives to South Carolina in the seventeenth and eighteenth centuries, finally touching lightly on the nineteenth and twentieth centuries. Brown explores how these African peoples brought, maintained, and transmitted their understandings of spiritual relationships between the physical land of the living and the spiritual land of the dead, and from there how the notions of the African simbi spirits translated through a particular region of South Carolina.

In The Colonization of Psychic Space, Kelly Oliver constructs and argues for a new theory of subjectivity and individuation—one predicated on a radical forgiveness born of interrelationality and reconciliation between self and culture. Oliver argues that we have neglected to fully explore exactly how sublimation functions in the creation of the self, saying that oppression leads to a unique form of alienation which never fully allows the oppressed to learn to sublimate—to translate their bodily impulses into articulated modes of communication—and so they cannot become full individuals, only ever struggling against their place in society, never fully reconciling with it.

These works are very different, so, to achieve their goals, Brown and Oliver lean on distinct tools, methodologies, and sources. Brown focuses on the techniques of religious studies as he examines a religious history: historiography, anthropology, sociology, and linguistic and narrative analysis. He explores the written records and first-person accounts of enslaved peoples and their captors, as well as the contextualizing historical documents of Black liberation theorists who were contemporary to the time frame he discusses. Oliver’s project is one of social psychology, and she explores it through the lenses of Freudian and Lacanian psychoanalysis, social construction theory, Hegelian dialectic, and the works of Frantz Fanon. She is looking to build a psycho-social analysis that takes both the social and the individual into account, fundamentally asking the question, “How do we belong to the social as singular?”

Cyborg Theology and An Anthropology of Robots and AI

Scott Midson’s Cyborg Theology and Kathleen Richardson’s An Anthropology of Robots and AI both trace histories of technology and human-machine interactions, and both make use of fictional narratives as well as other theoretical techniques. The goal of Midson’s book is to put forward a new understanding of what it means to be human, an understanding to supplant the myth of a perfect “Edenic” state and the various disciplines’ dichotomous oppositions of “human” and “other.” This new understanding, Midson says, exists at the intersection of technological, theological, and ecological contexts, and he argues that an understanding of the conceptual category of the cyborg can allow us to understand this assemblage in a new way.

That is, all of the categories of “human,” “animal,” “technological,” “natural,” and more are far more porous than people tend to admit and their boundaries should be challenged; this understanding of the cyborg gives us the tools to do so. Richardson, on the other hand, seeks to argue that what it means to be human has been devalued by the drive to render human capacities and likenesses into machines, and that this drive arises from the male-dominated and otherwise socialized spaces in which these systems are created. The more we elide the distinction between the human and the machine, the more we will harm human beings and human relationships.

Midson’s training is in theology and religious studies, and so it’s no real surprise that he primarily uses theological exegesis (specifically an exegesis of the Genesis creation stories), but he also deploys the tools of cyborg anthropology (specifically Donna Haraway’s 1991 work on cyborgs), sociology, anthropology, and comparative religious studies. He engages in interdisciplinary narrative analysis and comparison, exploring the themes from several pieces of speculative fiction media and the writings of multiple theorists from several disciplines.


Bodyminds, Self-Transformations, and Situated Selfhood

Back in the spring, I read and did a critical comparative analysis of both Cressida J. Heyes’ Self-Transformations: Foucault, Ethics, and Normalized Bodies and Dr. Sami Schalk’s BODYMINDS REIMAGINED: (Dis)ability, Race, and Gender in Black Women’s Speculative Fiction. Each of these texts aims to explore conceptions of modes of embodied being, and the ways the exterior pressure of societal norms impacts what are seen as “normal” or “acceptable” bodies.

For Heyes, that exploration takes the form of three case studies: The hermeneutics of transgender individuals, especially trans women; the “Askeses” (self-discipline practices) of organized weight loss dieting programs; and “Attempts to represent the subjectivity of cosmetic surgery patients.” Schalk’s site of interrogation is Black women speculative fiction authors and the ways in which their writing illuminates new understandings of race, gender, and what Schalk terms “(dis)ability.”

Both Heyes and Schalk focus on popular culture and they both center gender as a valence of investigation because the embodied experience of women in western society is the crux point for multiple intersecting pressures.


Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES

[“Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” was originally published in Social Epistemology Review and Reply Collective 7, no. 2 (2018): 64-69.
The pdf of the article gives specific page references. Shortlink: https://wp.me/p1Bfg0-3US]

[Image of an eye in a light-skinned face; the iris and pupil have been replaced with a green neutral-faced emoji; by Stu Jones via CJ Sorg on Flickr / Creative Commons]

Shannon Vallor’s most recent book, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting takes a look at what she calls the “Acute Technosocial Opacity” of the 21st century, a state in which technological, societal, political, and human-definitional changes occur at such a rapid-yet-shallow pace that they block our ability to conceptualize and understand them.[1]

Vallor is one of the most publicly engaged technological ethicists of the past several years, and much of her work’s weight comes from its direct engagement with philosophy—both philosophy of technology and various virtue ethical traditions—and the community of technological development and innovation that is Silicon Valley. It’s from this immersive perspective that Vallor begins her work in Virtues.

Vallor contends that we need a new way of understanding the projects of human flourishing and seeking the good life, an understanding which can help us reexamine how we make and participate through and with the technoscientific innovations of our time. The project of this book, then, is to provide the tools to create this new understanding, tools which Vallor believes can be found in an examination and synthesis of the world’s three leading Virtue Ethical Traditions: Aristotelian ethics, Confucian ethics, and Buddhism.


A Discussion on Daoism and Machine Consciousness

Over at AFutureWorthThinkingAbout, there is the audio and text for a talk about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings; that is, engaging multiple tokens and types of minds outside the assumed human “default” of the straight, white, cis, ablebodied, neurotypical male:

My starting positions, here, are that, 1) in order to do the work correctly, we literally must refrain from resting in abstraction, where, by definition, the kinds of models that don’t seek to actually engage with the people in question from within their own contexts, before deciding to do something “for someone’s own good,” represent egregious failure states. That is, we have to try to understand each other well enough to perform mutually modeled interfaces of “what you’d have done unto you and what they’d have you do unto them.” I know it doesn’t have the same snap as “do unto others,” but it’s the only way we’ll make it through.

[An image of a traditional Yin-Yang carved in a silver ring]

2) There are multiple types of consciousness, even within the framework of the human spectrum, and that the expression of or search for any one type is in no way meant to discount, demean, or erase any of the others. In fact, it is the case that we will need to seek to recognize and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. With that in mind, I use “machine consciousness” to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in “AGI.”

For the sake of this, as I’ve related elsewhere, I (perhaps somewhat paradoxically) think the term “artificial intelligence” is problematic. Anything that does the things we want machine minds to do is genuinely intelligent, not “artificially” so, where we use “artificial” to mean “fake,” or “contrived.” To be clear, I’m specifically problematizing the “natural/technological” divide that gives us “art vs artifice,” for reasons previously outlined here.

The overarching project of training a machine learning program and eventual AI will require engagement with religious texts (a very preliminary take on this has been taken up by Rose Eveleth at the Flash Forward podcast), but also a broader engagement with discernment and decision-making. Even beginning to program or code for this will require us to think very differently about the project than has thus far been in evidence.

Read or listen to the rest of A Discussion on Daoism and Machine Consciousness at A Future Worth Thinking About

On Adaptable Modes of Thought

This piece originally appeared at A Future Worth Thinking About

-Human Dignity-

The other day I got a CFP for “the future of human dignity,” and it set me down a path thinking.

We’re worried about shit like mythical robots that can somehow simultaneously enslave us and steal the shitty low paying jobs we none of us want to but all of us have to have so we can pay off the debt we accrued to get the education we were told would be necessary to get those jobs, while other folks starve and die of exposure in a world that is just chock full of food and houses…

About shit like how we can better regulate the conflated monster of human trafficking and every kind of sex work, when human beings are doing the best they can to direct their own lives—to live and feed themselves and their kids on their own terms—without being enslaved and exploited…

About, fundamentally, how to make reactionary laws to “protect” the dignity of those of us whose situations the vast majority of us have not worked to fully appreciate or understand, while we all just struggle to not get: shot by those who claim to protect us, willfully misdiagnosed by those who claim to heal us, or generally oppressed by the system that’s supposed to enrich and uplift us…

…but no, we want to talk about the future of human dignity?

Louisiana’s drowning, Missouri’s on literal fire, Baltimore is almost certainly under some ancient mummy-based curse placed upon it by the angry ghost of Edgar Allan Poe, and that’s just in the One Country.

Motherfucker, human dignity ain’t got a Past or a Present, so how about let’s reckon with that before we wax poetically philosophical about its Future.

I mean, it’s great that folks at Google are finally starting to realise that making sure the composition of their teams represents a variety of lived experiences is a good thing. But now the questions are, 1) do they understand that it’s not about tokenism, but about being sure that we are truly incorporating those who were previously least likely to be incorporated, and 2) what are we going to do to not only specifically and actively work to change that, but also PUBLICIZE THAT WE NEED TO?

These are the kinds of things I mean when I say, “I’m not so much scared of/worried about AI as I am about the humans who create and teach them.”

There’s a recent opinion piece at the Washington Post titled “Why perceived inequality leads people to resist innovation.” I read something like that and I think… Right, but… that perception is a shared one, based on the real impacts of tech in the lives of many people; impacts which are (get this) drastically unequal. We’re talking about implications across communities, nations, and the world, at an intersection with a tech industry that has a really quite disgusting history of “disruptively innovating” people right out of their homes and lives without having ever asked the affected parties about what they, y’know, NEED.

So yeah. There’s a fear of inequality in the application of technological innovation… Because there’s a history of inequality in the application of technological innovation!

This isn’t some “well aren’t all the disciplines equally at fault here,” pseudo-Kumbaya false equivalence bullshit. There are neoliberal underpinnings in the tech industry that are basically there to fuck people over. “What the market will bear” is code for, “How much can we screw people before there’s backlash? Okay so screw them exactly that much.” This model has no regard for the preexisting systemic inequalities between our communities, and even less for the idea that it (the model) will both replicate and iterate upon those inequalities. That’s what needs to be addressed, here.

Check out this piece over at Killscreen. We’ve talked about this before—about how we’re constantly being sold that we’re aiming for a post-work economy, where the internet of things and self-driving cars and the sharing economy will free us all from the mundaneness of “jobs,” all while we’re simultaneously being asked to ignore that our trajectory is gonna take us straight through and possibly land us square in a post-Worker economy, first.

Never mind that we’re still gonna expect those ex-workers to (somehow) continue to pay into capitalism, all the while.

If, for instance, either Uber’s plan for a driverless fleet or the subsequent backlash from their stable—I mean “drivers”—is shocking to you, then you have managed to successfully ignore this trajectory.

Completely.

Disciplines like psychology and sociology and history and philosophy? They’re already grappling with the fears of the ones most likely to suffer said inequality, and they’re quite clear on the fact that, the ones who have so often been fucked over?

Yeah, their fears are valid.

You want to use technology to disrupt the status quo in a way that actually helps people? Here’s one example of how you do it: “Creator of chatbot that beat 160,000 parking fines now tackling homelessness.”

Until then, let’s talk about constructing a world in which we address the needs of those marginalised. Let’s talk about magick and safe spaces.

-Squaring the Circle-

Speaking of CFPs, several weeks back, I got one for a special issue of Philosophy and Technology on “Logic As Technology,” and it made me realise that Analytic Philosophy somehow hasn’t yet understood and internalised that its wholly invented language is a technology

…and then that realisation made me realise that Analytic Philosophy hasn’t understood that language as a whole is a Technology.

And this is something we’ve talked about before, right? Language as a technology, but not just any technology. It’s the foundational technology. It’s the technology on which all others are based. It’s the most efficient way we have to cram thoughts into the minds of others, share concept structures, and make the world appear and behave the way we want it to. The more languages we know, right?

We can string two or more knowns together in just the right way, and create a third, fourth, fifth known. We can create new things in the world, wholecloth, as a result of new words we make up or old words we deploy in new ways. We can make each other think and feel and believe and do things, with words, tone, stance, knowing looks. And this is because Language is, at a fundamental level, the oldest magic we have.


Scene from the INJECTION issue #3, by Warren Ellis, Declan Shalvey, and Jordie Bellaire. ©Warren Ellis & Declan Shalvey.

Lewis Carroll tells us that whatever we tell each other three times is true, and many have noted that lies travel far faster than the truth, and at the crux of these truisms—the pivot point, where the power and leverage are—is Politics.

This week, much hay is being made about the University of Chicago’s letter decrying Safe Spaces and Trigger Warnings. Ignoring for the moment that every definition of “safe space” and “trigger warning” put forward by their opponents tends to be a straw man of those terms, let’s just make an attempt to understand where they come from, and how we can situate them.

Trauma counseling and trauma studies are where safe space and trigger warning language comes from, and for the latter, that origin is damn near axiomatic: triggers are about trauma. But safe space language has far more granularity than that. Microaggressions are certainly damaging, but they aren’t on the same level as acute traumas. Where acute traumas are like gunshots or bomb blasts (and may indeed be those actual things), societal microaggressions are more like a slow, constant siege. But we still need the language of safe spaces to discuss them—said space is something like a bunker in which to regroup, reassess, and plan for what comes next.

Now it is important to remember that there is a very big difference between “safe” and “comfortable,” and when laying out the idea of safe spaces, every social scientist I know takes great care to outline that difference.

Education is about stretching ourselves, growing and changing, and that is discomfort almost by definition. I let my students know that they will be uncomfortable in my class, because I will be challenging every assumption they have. But discomfort does not mean I’m going to countenance racism or transphobia or any other kind of bigotry.

Because the world is not a safe space, but WE CAN MAKE IT SAFER for people who are microaggressed against, marginalised, assaulted, and killed for their lived identities, by letting them know not only how to work to change it, but SHOWING them through our example.

Like we’ve said, before: No, the world’s not safe, kind, or fair, and with that attitude it never will be.

So here’s the thing, and we’ll lay it out point-by-point:

A Safe Space is any realm that is marked out for the nonjudgmental expression of thoughts and feelings, in the interest of honestly assessing and working through them.

“Safe Space” can mean many things, from “safe FROM Racist/Sexist/Homophobic/Transphobic/Fatphobic/Ableist Microaggressions” to “safe FOR the thorough exploration of our biases and preconceptions.” The terms of the safe space are negotiated at the marking out of them.

The terms are mutually agreed upon by all parties. The only imposition would be to be open to the process of expressing and thinking through oppressive conceptual structures.

Everything else—such as whether to address those structures as they exist in ourselves (internalised oppressions), in others (aggressions, micro- or regular sized), or both and their intersection—is negotiable.

The marking out of a Safe Space performs the necessary function, at the necessary time, defined via the particular arrangement of stakeholders, mindset, and need.

And, as researcher John Flowers notes, anyone who’s ever been in a Dojo has been in a Safe Space.

From a Religious Studies perspective, defining a safe space is essentially the same process as that of marking out a RITUAL space. For students or practitioners of any form of Magic[k], think Drawing a Circle, or Calling the Corners.

Some may balk at the analogy to the occult, thinking that it cheapens something important about our discourse, but look: Here’s another way we know that magick is alive and well in our everyday lives:

If they could, a not-insignificant number of US Republicans would overturn the Affordable Care Act and rally behind a Republican-crafted replacement (RCR). However, because the ACA has done so very much good for so many, it’s likely that the only RCR that would have enough support to pass would be one that looked almost identical to the ACA. The only material difference would be that it didn’t have President Obama’s name on it—which is to say, it wouldn’t be associated with him, anymore, since his name isn’t actually on the ACA.

The only reason people think of the ACA as “Obamacare” is because US Republicans worked so hard to make that name stick, and now that it has been widely considered a triumph, they’ve been working just as hard to get his name away from it. And if they did manage to achieve that, it would only be true due to some arcane ritual bullshit. And yet…

If they managed it, it would be touted as a “Crushing defeat for President Obama’s signature legislation.” It would have lasting impacts on the world. People would be emboldened, others defeated, and new laws, social rules, and behaviours would be undertaken, all because someone’s name got removed from a thing in just the right way.

And that’s Magick.

The work we do in thinking about the future sometimes requires us to think about things from what stuffy assholes in the 19th century liked to call a “primitive” perspective. They believed in a kind of evolutionary anthropological categorization of human belief, one in which all societies move from “primitive” beliefs like magic through moderate belief in religion, all the way to sainted perfect rational science. In contemporary Religious Studies, this evolutionary model is widely understood to be bullshit.

We still believe in magic, we just call it different things. The concept structures of sympathy and contagion are still at play, here, the ritual formulae of word and tone and emotion and gesture all still work when you call them political strategy and marketing and branding. They’re all still ritual constructions designed to make you think and behave differently. They’re all still causing spooky action at a distance. They’re still magic.

The world still moves on communicated concept structure. It still turns on the dissemination of the will. If I can make you perceive what I want you to perceive, believe what I want you to believe, move how I want you to move, then you’ll remake the world, for me, if I get it right. And I know that you want to get it right. So you have to be willing to understand that this is magic.

It’s not rationalism.

It’s not scientism.

It’s not as simple as psychology or poll numbers or fear or hatred or aspirational belief causing people to vote against their interests. It’s not that simple at all. It’s as complicated as all of them, together, each part resonating with the others to create a vastly complex whole. It’s a living, breathing thing that makes us think not just “this is a thing we think” but “this is what we are.” And if you can do that—if you can accept the tools and the principles of magic, deploy the symbolic resonance of dreamlogic and ritual—then you might be able to pull this off.

But, in the West, part of us will always balk at the idea that the Rational won’t win out. That the clearer, more logical thought doesn’t always save us. But you have to remember: Logic is a technology. Logic is a tool. Logic is the application of one specific kind of thinking, over and over again, showing a kind of result that we convinced one another we preferred to other processes. It’s not inscribed on the atoms of the universe. It is one kind of language. And it may not be the one most appropriate for the task at hand.

Put it this way: When you’re in Zimbabwe, will you default to speaking Chinese? Of course not. So why would we default to mere Rationalism, when we’re clearly in a land that speaks a different dialect?

We need spells and amulets, charms and warded spaces; we need sorcerers of the people to heal and undo the hexes being woven around us all.

-Curious Alchemy-

Ultimately, the rigidity of our thinking and our inability to adapt have led us to be surprised by too much that we wanted to believe could never have come to pass. We want to call all of this “unprecedented,” when the truth of the matter is that we carved this precedent out every day for hundreds of years, and the ability to think in weird paths is what will define those who thrive.

If we are going to do the work of creating a world in which we understand what’s going on, and can do the work to attend to it, then we need to think about magic.


If you liked this article, consider dropping something into the Technoccult & A Future Worth Thinking About Tip Jar

A Conversation With Klint Finley About AI and Ethics

I spoke with Klint Finley, known to this parish, over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, yesterday, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier just this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more people are increasingly willing to have this discussion, at all.

To see my comments and read the rest of the article, click through, above.

Dalai Lama Says Religion Is No Longer Sufficient For Ethics

[Photo of the Dalai Lama]

Via io9, here’s what the Dalai Lama wrote on Facebook:

All the world’s major religions, with their emphasis on love, compassion, patience, tolerance, and forgiveness can and do promote inner values. But the reality of the world today is that grounding ethics in religion is no longer adequate. This is why I am increasingly convinced that the time has come to find a way of thinking about spirituality and ethics beyond religion altogether.

io9: Dalai Lama tells his Facebook friends that religion “is no longer adequate”

The Dalai Lama has been saying he hopes for a woman to succeed him and has also said it’s possible he will have no successor.

Photo by Luca Galuzzi / CC

Study: Atheists ‘just as ethical as churchgoers’

[Image: “there’s probably no God”]

People who have no religion know right from wrong just as well as regular worshippers, according to the study.

The team behind the research found that most religions were similar and had a moral code which helped to organise society.

But people who did not have a religious background still appeared to have intuitive judgments of right and wrong in common with believers, according to the findings, published in the journal Trends in Cognitive Sciences.

Dr Marc Hauser, from Harvard University, one of the co-authors of the research, said that he and his colleagues were interested in the roots of religion and morality.

Telegraph: Atheists ‘just as ethical as churchgoers’

(via Religion News)

© 2024 Technoccult
