Tag: audio

A Discussion on Daoism and Machine Consciousness

Over at AFutureWorthThinkingAbout, there is the audio and text for a talk about how non-Western philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings, meaning engaging multiple tokens and types of minds, outside of the assumed human “default” of straight, white, cis, able-bodied, neurotypical male:

My starting positions, here, are that, 1) in order to do the work correctly, we literally must refrain from resting in abstraction, where, by definition, the kinds of models that don’t seek to actually engage with the people in question from within their own contexts, before deciding to do something “for someone’s own good,” represent egregious failure states. That is, we have to try to understand each other well enough to perform mutually modeled interfaces of “what you’d have done unto you and what they’d have you do unto them.” I know it doesn’t have the same snap as “do unto others,” but it’s the only way we’ll make it through.

[An image of a traditional Yin-Yang carved in a silver ring]

2) There are multiple types of consciousness, even within the framework of the human spectrum, and that the expression of or search for any one type is in no way meant to discount, demean, or erase any of the others. In fact, it is the case that we will need to seek to recognize and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. With that in mind, I use “machine consciousness” to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in “AGI.”

For the sake of this, as I’ve related elsewhere, I (perhaps somewhat paradoxically) think the term “artificial intelligence” is problematic. Anything that does the things we want machine minds to do is genuinely intelligent, not “artificially” so, where we use “artificial” to mean “fake,” or “contrived.” To be clear, I’m specifically problematizing the “natural/technological” divide that gives us “art vs artifice,” for reasons previously outlined here.

The overarching project of training a machine learning program and eventual AI will require engagement with religious texts (a very preliminary pass at this has been made by Rose Eveleth at the Flash Forward Podcast), but also a broader engagement with discernment and decision-making. Even beginning to program or code for this will require us to think very differently about the project than has thus far been in evidence.

Read or listen to the rest of A Discussion on Daoism and Machine Consciousness at A Future Worth Thinking About

Theorizing the Web 2017: “Apocalypse Buffering”

Over at A Future Worth Thinking About, I’ve posted an expanded riff on a presentation I gave at the Theorizing the Web 2017 Invited Panel, “Apocalypse Buffering.” It’s called “How We Survive After The Events.” If you’re a regular reader of the newsletter, then this will likely be familiar.

[Black lettering on a blue field reads “Apocalypse Buffering,” above an old-school hourglass icon.]

My co-panelists were Tim Maughan, who talked about the dystopic horror of shipping container sweatshop cities, and Jade E. Davis, discussing an app to know how much breathable air you’ll be able to consume in our rapidly collapsing ecosystem before you die. Then my piece.

Our moderator, organizer, and all-around fantastic person who now has my implicit trust was Ingrid Burrington. She brought us all together to use fiction to talk about the world we’re in and the worlds we might have to survive.

So, if you’ve enjoyed the recent posts, “The Hermeneutics of Insurrection” and “Jean-Paul Sartre and Albert Camus Fistfight in Hell,” then you may want to check this out, too, as all three could probably be considered variations on the same theme.

Mindful Cyborgs: AnxietyBox, Alerts, and Attention with Paul Ford

This week on Mindful Cyborgs we talk with former Harper’s editor and hobbyist programmer Paul Ford about AnxietyBox, a tool he built to help manage his own anxiety.

Download and Show Notes: Mindful Cyborgs: AnxietyBox, Alerts, and Attention with Paul Ford, Part 1

Plus, I’m behind on our releases. It turns out the second part of our interview with Eleanor Saitta, in which we do a deeper dive on Nordic Larp, has been up for a couple weeks:

Data-driven Introspection and Surveillance Culture with Eleanor Saitta

Latest Mindful Cyborgs:

Eleanor Saitta works professionally as a computer security expert, but more generally she “looks at how systems break” – computer, social, infrastructural, legal, and more. She comes on the show today to share a unique perspective on the surveillance/security culture in which we have found ourselves enmeshed. Don’t miss this one (or part 2!).

Download and Show Notes: Mindful Cyborgs: Data-driven Introspection and Surveillance Culture with Eleanor Saitta

Mindful Cyborgs: Part Two of our Conversation with Zeynep Tufekci About Algorithms

This time around I also talk a bit about the 15th anniversary of Technoccult and my struggle to find relevance in blogging in 2015.

Download and Notes: Mindful Cyborgs: Algorithmic Reverberations on the Feed PART 2

Mindful Cyborgs: Zeynep Tufekci on the Consequence of Algorithms

This week Zeynep Tufekci, an assistant professor in the Department of Sociology at University of North Carolina, Chapel Hill*, talks with us about the implications of algorithmically filtering social media feeds.

Download and Show Notes: Mindful Cyborgs: Algorithmic Reverberations on the Feed

For more on the topic, check out Zeynep’s article on Facebook’s algorithms and Ferguson.

*This was recorded a few weeks ago, well before the recent tragedy in Chapel Hill, so we didn’t discuss that.

Mindful Cyborgs: Digital Dualism and Its Malcontents

In this episode, our concerns about where technology is taking us bring the conversation back to issues raised in some of the earliest episodes, as we talk about the duality of “online” and “offline” and whether our concerns are rooted in technology or in society. Also, perhaps a little late, a conversation about why Google Glass was such a bomb.

Download and Show Notes: Mindful Cyborgs: Cyborgian Promise, Cyborgian Perils

Mindful Cyborgs: Coping with Depression in a Time Sick World

This week we talk about the weirdness of being on TV, the early history of the internet, and coping mechanisms for depression. This one gets really personal. Probably our most intense episode yet.

Download and Show Notes: Mindful Cyborgs: Dark Nights and the Ghosts of Tech’s Past

Mindful Cyborgs: Your Digital Life After Your Death 2

In the second part of our conversation with Willow Brugh of the MIT Media Lab, we talk about the Networked Mortality project and their efforts to help you figure out what to do with all your digital stuff when you die.

Download and Full Transcript: Mindful Cyborgs: Color Coding for Sex and Death PART 2

Mindful Cyborgs: Your Digital Life After Your Death

This week we talk with Willow Brugh of the MIT Media Lab about what happens to all your digital “stuff” when you die. How will your co-workers get the last of your uncompleted work? What will happen to your Facebook page? Who will delete your porn folder? Willow talks about all that and more.

Download and Show Notes: Mindful Cyborgs: Color Coding for Sex and Death

© 2024 Technoccult
