The article, titled “The weirdest people in the world?”, appears in the current issue of the journal Behavioral and Brain Sciences. Dr. Henrich and co-authors Steven Heine and Ara Norenzayan argue that life-long members of societies that are Western, educated, industrialized, rich, democratic — people who are WEIRD — see the world in ways that are alien to the rest of the human family. The UBC trio have come to the controversial conclusion that, say, the Machiguenga are not psychological outliers among humanity. We are. […]
Others punish participants perceived as too altruistic in co-operation games, but very few in the English-speaking West would ever dream of penalizing the generous. Westerners tend to group objects based on resemblance (notebooks and magazines go together, for example) while Chinese test subjects prefer function (grouping, say, a notebook with a pencil). Privileged Westerners, uniquely, define themselves by their personal characteristics as opposed to their roles in society. […]
The paper argues that either many studies’ conclusions have to be retested on non-WEIRD cultural groups — a daunting proposition in terms of resources — or they must be understood to offer insight only into the minds of rich, educated Westerners.
It’s about forty years since “Future Shock” was published, and it seems to have withstood the test of time. More to the point, the Tofflers’ predictions for how the symptoms would be manifest appear to be roughly on target. They predicted a growth of cults and religious fundamentalism; rejection of modernism; irrational authoritarianism; and widespread insecurity. They didn’t nail the other great source of insecurity today, the hollowing-out of state infrastructure and externally imposed asset-stripping in the name of economic orthodoxy that Naomi Klein highlighted in The Shock Doctrine, but to the extent that Friedmanite disaster capitalism can be seen as a predatory corporate response to massive political and economic change, I’m inclined to put disaster capitalism down as being another facet of the same problem. (And it looks as if the UK and USA are finally on the receiving end of disaster capitalism at home, in the post-2008 banking crisis era.) […]
I’m going to give it a qualified thumbs-up, for now. Thumbs-up, because religious intolerance is clearly not the answer — but a qualified thumbs-up because I don’t believe we should give a free pass to all religious doctrines in the name of tolerance. Some beliefs can kill, when they are translated into action. They can kill directly, as when the Taliban stones women to death for adultery, or they can kill indirectly, as in the Catholic Church’s opposition to the use of condoms (which makes it harder to prevent the spread of HIV in sub-Saharan Africa, where the disease holocaust is killing two million people a year). We should, in my view, not seek to accommodate those religious doctrines that would impose restrictions on people — especially non-co-religionists — through the force of law. (If you’re a Hassidic Jew and don’t want to eat pork products, that’s fine; campaigning to ban pork products from sale to anyone at all: not so fine. And so on.)
But ultimately, religious doctrines aren’t the source of today’s social problems. The taproots run deeper, and religious extremism is only one manifestation of the underlying problem: widespread future shock. And I’ve got no easy answer to how to deal with it, unless it is to apply a little humanity to our fellow sufferers when we meet them.
I’m not familiar with Leon Wieseltier (whom Alex Pang says he usually dislikes), but I agree with the portion of this essay on the mosque that isn’t behind The New Republic’s paywall:
Collective responsibility. One of the most accomplished Jewish terrorists of our time, Baruch Goldstein, came from the Jewish universe in which I was raised. When he committed his crime, there were a few former and present citizens of that universe, a revered rabbi of mine among them, who demanded a stringent communal introspection; but the critics were denounced as slanderers who tarred all of religious Zionism, or all of “Modern Orthodox” Judaism, or all of Judaism, with the same treasonous brush. The killer, we were angrily instructed, was an aberration, and any generalization from his action was an unwarranted imputation of collective responsibility. I disagreed. Baruch Goldstein murdered in the name of Judaism, with an interpretation of Judaism, from a social and intellectual position within Judaism. The same was later true of Yigal Amir. They did not represent the entirety of Judaism, or of the Jewish institutions that formed them—but the massacre in Hebron and the assassination in Tel Aviv were among their effects. If the standpoint of broadly collective responsibility was the wrong way to explain the atrocities, so too was the standpoint of purely individual responsibility. There were currents of culture behind the killers. Their ideas were not only their own. I am reminded of those complications when I hear that Islam is a religion of peace. I have no quarrel with the construction of Cordoba House, but not because Islam is a religion of peace. It is not. Like Christianity and like Judaism, Islam is a religion of peace and a religion of war. All the religions have all the tendencies within them, and in varying historical circumstances varying beliefs and practices have come to the fore. It is absurd to describe the perpetrators of September 11 as “murderers calling themselves Muslims,” as Karen Hughes recently did. They did not call themselves Muslims. They were Muslims. America was not attacked by Islam, but it was also not attacked by Jainism. 
Mohammed Atta and his band (as well as the growing number of “homegrown” Islamist killers and plotters) represent a real and burgeoning development within Islam, an actualization of one of Islam’s possibilities, an indigenous transnational movement of apocalyptic violence that has brought misery to Muslim societies, and to us. It is not Islamophobic to say so. Quite the contrary: it is to side with Muslims who are struggling against the same poison as we are. Apologetic definitions of Islam will not avail anybody in this struggle.
People keep framing this as a religious freedom issue. But there’s a difference between practicing your religion, which everyone has a right to do, and rubbing your religion in people’s faces as a triumphalist political statement, which is what’s happening here. I’d be interested to know just how bad an insult has to be before it’s no longer protected by the First Amendment. After all, the Second Amendment gives Americans the right to bear arms. But in practice you need a permit to walk around packing hardware, and not everyone can get one despite the Second Amendment.
It is indeed an issue of freedom of religion – and it’s also a freedom of assembly, a freedom of speech, and a property rights question.
Anyway, the intent of Park51 should be applauded because it sets out to do what we, in a civil society, should do when we disagree: have open and peaceful discussions about the issues. Not blowing people up or sending police to buildings and telling the owners what religion they can practice on the premises.
I’m not really interested in splitting hairs over whether Park51 will be a mosque or not, or how close it is to Ground Zero (for the record, it’s really, really close to Ground Zero, but I have a hard time calling it a mosque – but I don’t think it’s important). But this essay makes one other important point:
There’s one more catch for the opponents of the so-called Ground Zero mosque: by the same logical leap you can call the Cordoba Center a “mosque,” you can also call Ground Zero as it already exists a giant, open-air mosque. Muslim prayers are already taking place right on the edge of the construction site, and not for world domination. Families are going there to pray — for the souls of the dozens of innocent Muslim victims who died on September 11.
I’ve covered this problem before, but it’s good to see it getting more traction – whether it will do any good remains to be seen.
We often base our opinions on our beliefs, which can have an uneasy relationship with facts. And rather than facts driving beliefs, our beliefs can dictate the facts we choose to accept. They can cause us to twist facts so they fit better with our preconceived notions. Worst of all, they can lead us to uncritically accept bad information just because it reinforces our beliefs. This reinforcement makes us more confident we’re right, and even less likely to listen to any new information. And then we vote.
This effect is only heightened by the information glut, which offers — alongside an unprecedented amount of good information — endless rumors, misinformation, and questionable variations on the truth. In other words, it’s never been easier for people to be wrong, and at the same time feel more certain that they’re right.
Jay Rosen draws attention to the slight difference in behavior between self-identified liberals and conservatives:
The participants who self-identified as conservative believed the misinformation on WMD and taxes even more strongly after being given the correction. With those two issues, the more strongly the participant cared about the topic — a factor known as salience — the stronger the backfire. The effect was slightly different on self-identified liberals: When they read corrected stories about stem cells, the corrections didn’t backfire, but the readers did still ignore the inconvenient fact that the Bush administration’s restrictions weren’t total.
I also thought this was particularly interesting:
A 2006 study by Charles Taber and Milton Lodge at Stony Brook University showed that politically sophisticated thinkers were even less open to new information than less sophisticated types. These people may be factually right about 90 percent of things, but their confidence makes it nearly impossible to correct the 10 percent on which they’re totally wrong.
It’s not all doom and gloom, but I’ll let you read the article for the few rays of optimism. One thing not mentioned in the article: fact checking articles are becoming more popular (but I suppose they might not actually change people’s minds).
NPR covered this today as well, but I was disappointed in the portion of it I heard.
Sometimes we hate being wrong because of the consequences. Mistakes can cost us time and money, expose us to danger or inflict harm on others, and erode the trust extended to us by our community. Yet even when we are wrong about completely trivial matters — when we mispronounce a word, mistake our neighbor Emily for our co-worker Anne, make the dinner reservation for Tuesday instead of Thursday — we often respond with embarrassment, irritation, defensiveness, denial, and blame. Deep down, it is wrongness itself that we hate. […]
As ashamed as we may feel of our mistakes, they are not a byproduct of all that’s worst about being human. On the contrary: They’re a byproduct of all that’s best about us. We don’t get things wrong because we are uninformed and lazy and stupid and evil. We get things wrong because we get things right. The more scientists understand about cognitive functioning, the more it becomes clear that our capacity to err is utterly inextricable from what makes the human brain so swift, adaptable, and intelligent.
Global environmental problems are not, and will not be, mainly a problem of overbreeding Indians or Africans. First, their birthrates are coming down fast, with Indian women, for instance, having fewer than three children on average today; and even African women have falling fertility. And secondly, because overbreeding — in the sense of women having more than replacement levels of children — is almost entirely in countries with a very low per-capita footprint on the planet. For instance, the carbon emissions of one American are the same as those of 20 Indians, 30 Pakistanis, 40 Nigerians and 250 Ethiopians. If, as economists suggest, the world economy will grow by 400 percent by 2050, then no more than a tenth of that will be a result of population growth. The issue is consumption, and that puts the onus right back on the conspicuous consumers to do something about their economic systems, not least before more developing countries follow the same model.
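The arithmetic behind that last claim can be sanity-checked with round numbers. A minimal sketch in Python, assuming a 2010 world population of roughly 6.9 billion and a mid-range 2050 projection of 9 billion (the population figures are assumptions for illustration, not from the interview):

```python
# Rough sanity check of the "no more than a tenth" claim.
# Population figures below are round assumptions, not from the article.
pop_2010 = 6.9e9          # approximate world population, 2010
pop_2050 = 9.0e9          # a common mid-range projection for 2050
economic_growth = 4.00    # "400 percent" growth, i.e. a 5x larger economy

# If per-capita output stayed flat, population growth alone would
# expand the economy by this fraction (~0.30, a 30% increase).
pop_growth = pop_2050 / pop_2010 - 1

# Population's share of the total 400% expansion.
share_from_population = pop_growth / economic_growth

print(round(share_from_population, 3))  # prints 0.076
```

On these assumptions, population accounts for under 8 percent of the projected economic expansion, consistent with the "no more than a tenth" figure; the rest is growth in per-capita consumption.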
So these worries about overpopulation are unfounded?
When Paul Ehrlich wrote his famous book [“The Population Bomb”], women were having an average around the world of five or six children; now they’re having an average of 2.6. Fertility rates around the world have halved. That’s not just true in Europe and North America; they’re way below replacement levels in most of East Asia now. Not just China but Japan, Korea, Vietnam and Burma have replacement rates of fertility or below. Around the world, fertility rates have been coming down really sharply. So the population bomb as we’ve conceived it before really isn’t there. There’s still population growth going on, but that’s going to stabilize. […]
If chaos theory taught us anything, it’s that societies head off in all kinds of directions we couldn’t predict. Fifty years ago, if we had taken a slightly different path in industrial chemistry and used bromine instead of chlorine, we’d have burned out the entire ozone layer before we knew what the hell was going on, and the world would have been very different. There’s always scary stuff out there that we may not know about. You can’t predict the future. You can just try and plan for it.
Generations, like people, have personalities, and Millennials — the American teens and twenty-somethings who are making the passage into adulthood at the start of a new millennium — have begun to forge theirs: confident, self-expressive, liberal, upbeat and open to change.
They are more ethnically and racially diverse than older adults. They’re less religious, less likely to have served in the military, and are on track to become the most educated generation in American history.
Their entry into careers and first jobs has been badly set back by the Great Recession, but they are more upbeat than their elders about their own economic futures as well as about the overall state of the nation.
Klint Finley: Please tell us what you mean by “cyborg anthropology” and explain what it is that you do on a day-to-day basis.
Amber Case: Cyborg anthropology is the study of human and non-human interaction, especially the tools and networks formed by humans and non-human objects.
My work relates to tracing the history of tool use and how it has affected culture over time.
For instance, one can look at a hammer and notice that over the last 300 years the design and function of the hammer have not changed very much.
The shape, form, and function are still similar, but when one looks at the first computers, which were large machines running on vacuum tubes, and computers now, one sees that the computer’s overall look, function, shape, and size have drastically changed.
So then one must look at the idea of the hammer or knife. An animal must evolve a better tooth, or a sharper edge, with which to capture and kill prey. If a tooth breaks, or is not sharp enough, or the animal is not fast enough, that animal dies and cannot reproduce.
But humans have externalized that evolution by making tools outside of their mouths. The knife is an extension of the tooth that can be thrown. The speed and excellence of the knife depend on the worker, or on the person with enough power to have a worker who can create that tool.
Once we externalized objects and processes, we externalized evolution.
But the computer is different. Tool use has been physical for most of human evolution. Now we see computers as an interface not to the physical self, but to the mental self.
The mental self is an internal space, which is unseen, and a lot of what we see on a computer is unseen unless we look at it through an interface or portal.
So what I do in cyborg anthropology is consider how people upload their bodies into hyperspace, and how humanness is produced through machines and machines through humanness.
I also consider online presence, cell phones and the technosocial self.
What methodology do you employ? What is a day in the life of a cyborg anthropologist like?
My methodology is mainly qualitative analysis, ethnography and participant observation.
What I mean by that is that I use the anthropological method of ethnography to collect observations through participating in groups of people involved in tool use or digital networks and see how they work, play, communicate, and what their values are.
Part of my work is letting people know that they’ve always been part human and part machine. Donna Haraway talked about everyone being a low tech cyborg. That for some part of every day people are connected to a machine.
In my recent study of Facebook, I’ve combed through user stories and behavior and placed people into general groups of interaction. I’ve also studied how the interface – or the participation architecture – of the site influences how people act as they begin to move online and live out a great deal of their lives there. The shape of an interface really affects how people move, so a lot of my day is spent combing through the internet looking for that sort of behavior and jotting it down in a digital field journal of sorts.
The other part is looking for new developments. Things that break the norms. Those are harbingers of new trends and systemic shifts.
What are some of your most interesting recent findings?
Some of my favorite things have been mistakes. For instance, when a middle-aged woman thinks that she’s sending a private message to someone she’s been seeing, when in reality she has posted it on her wall for everyone to see.
Yahoo Answers is amazing. It’s where a lot of very young kids ask each other ridiculous questions – and other young kids answer back.
Also, looking at people’s signatures. Not their handwritten ones, but their digital ones. How they compose sentences and where they use capitalization. How they respond to things, etc. It really tells a lot about who they are.
The other thing I like to discover is digital artifacts. There are some digital archeologists and historians who try to keep data alive and in circulation. When one considers it, and Stewart Brand has mentioned this quite a bit… data is very fragile.
When one considers the pyramids and symbols carved into stone, that data is still around today. It’s been thousands of years and we still have it. Compare that with Twitter, where data is regularly dumped and not saved.
One of the problems is that machines don’t get heavier when we put data into them. Which seems strange, because information has weight in real life.
Jason Scott is a great data archivist. He runs textfiles.com. He saves BBS forums and other material from the ’80s that might otherwise have been erased over time.
It’s funny that you say that. Sometimes when I delete a lot of stuff from my laptop, I actually feel like my laptop is lighter. I know it isn’t, but it just seems like it is.
It’s interesting that you say that – it’s a sign that your senses are tied to a machine, that your machine has become an external brain of sorts.
The first time my computer crashed I felt I had lost half my brain.
Here is a conversation I had with @strangeways about weight.
@caseorganic: My old computer is being reformatted. I can feel the files being deleted. It’s a strange feeling, like re-writing memories.
@strangeways: I think it is completely possible. I’ve felt it many times before. There’s a transition from physical effects to mental ones.
@strangeways Physical storage came first, then mental storage. I bet mental phantom neuron syndrome will become more prevalent.
@caseorganic Sort of feels like amputation, doesn’t it? I wonder if one can experience phantom limb with a virtual body part.
There was a campaign for Maxtor about data. It becomes increasingly easy to put data into a system, but the data, once in the system, has an escape velocity like a black hole’s. The computer is beginning to liquefy the objects around it, like a black hole. Especially the iPhone – taking physical objects like compasses, games, cameras, notebooks, date books and address books, digitizing them, and centralizing them into one device.
What sorts of tools do you find most useful in your work?
I use a lot of TextEdit. I copy and paste things in, label them, and then name the file with descriptive words. That way my computer becomes a search engine for my research.
But the best tools are Skitch and Flickr. Skitch can take a screenshot and upload it automatically to my Flickr account. It’s my external brain. So I use Skitch and Flickr symbiotically to take quick screenshots of whatever I’m working on.
A random example from Amber’s Flickr stream
I use Moodle for private notes to myself, and I have some Pbwiki accounts. But Flickr is really the best. It allows sources, timestamps, tagging, and searching. And it allows comments, so my digital journal becomes a living creation.
You don’t have a PhD or other post-graduate degree, is that correct?
I do not have a PhD.
And you work in the private sector as a consultant?
Why did you decide to go into the private sector instead of continuing in academia? Do you think you will ever go back to academia?
I went to the private sector first because I just got out of college. I wrote a thesis on mobile phones and their technosocial sites of interaction. I got a degree in sociology and anthropology.
I was told to work two years in the “real world” before going back to academia, because going straight to grad school would leave me at a disadvantage. First, I wouldn’t know what the real world needed, and second, I wouldn’t know anything except academia.
My favorite conference was MIT’s Futures of Entertainment, which I spoke at in November 2008. I liked the conference because it was a hybrid event. It brought together people from industry and academia. Industry can benefit a lot from academia, but not from 200-page reports. And academia can benefit a lot from industry, but not from silly marketing statements.
So I wanted both perspectives. Someone has to be able to translate between the two. It’s useful; otherwise a lot of miscommunication happens and redundancies occur.
What advice would you give to liberal arts majors looking to make a career outside of academia?
Network. Network a whole lot.
Don’t network in a silly way. Network honestly. Find people who inspire and invigorate you, who make you work on things harder than ever before.
Create an online presence that is ubiquitous and enjoyable to interface with. Let it be known who you want to be. Put that on your business card and on your social profiles.
Be uniform in your focus. Set goals for who you want to meet.
Become a resource for people. Connect them. Have a blog or set of resources that aggregates and disperses useful information in your area of interest.
Attend local conferences. Speak at events. Volunteer at conferences.
Speaking is the easiest way to meet everyone in the room. Volunteering is the easiest way to meet all of the registrants, especially ones you might be too afraid to talk to.
Don’t be afraid to find the smartest person in the room and ask them how they got there.
Fail daily. Fail a whole bunch. Challenge yourself and don’t worry if you have no supporters. Be the first one there.
… That sounds like a promotional book, lol.
And speaking of conferences – you were a founder and organizer of CyborgCamp, and the second one is coming up in a few months. Could you tell us about the impetus of that event?
The idea behind CyborgCamp was to have a forum for the discussion of the past, present, and future. The conference was also livestreamed so that it would be accessible to anyone in the world. It was seen in over 50 countries.
The conference was not really created by me, but by a community that sprang up suddenly on Twitter. Within 3 hours, CyborgCamp had a website, a wiki, a sponsor, and 9 volunteers.
It wasn’t a choice for me. I knew I had to make the conference, and I strove to make it an invigorating experience. I found some great speakers, like Ward Cunningham, inventor of the Wiki.
The unconference part allowed the attendees to discuss what was really on their minds. We discussed everything. From agriculture to technoculture, to insulin pumps, to connectivity and the digital device, to strategy and the future. It was a cocreated event, and it was amazing to be a part of it.
A number of people in Brasil watched the conference, and there will be a CyborgCamp Brasil in May 2010.
The next domestic one will be in Portland in October.
People who have no religion know right from wrong just as well as regular worshippers, according to the study.
The team behind the research found that most religions were similar and had a moral code which helped to organise society.
But people who did not have a religious background still appeared to have intuitive judgments of right and wrong in common with believers, according to the findings, published in the journal Trends in Cognitive Sciences.
Dr Marc Hauser, from Harvard University, one of the co-authors of the research, said that he and his colleagues were interested in the roots of religion and morality.