Tag: oil

The Mad Max Future Already Happened

Fun post from steelweaver about how Mad Max was inspired by the 1976 oil crisis and has some unsettling parallels with our current situation:

as I remember it, the setting for the first movie in the Mad Max series is a world where oil scarcity has led to economic disaster and the beginning of the breakdown of social order; where, whilst the police and justice systems continue to function, governmental cutbacks have diminished their ability to effectively maintain control; and where, whilst small pockets of civil society remain relatively unchanged (Max lives in a comfortable suburb with his wife and child), increasingly large areas are plagued by criminal gangs of looters.

Just saying…

In fact, the three-movie arc of the Mad Max films is in many ways a beautifully realised (totally ridiculous, but excellently costumed) account of the slow breakdown of order (I), followed by total chaos (Road Warrior), followed by the first stages of re-establishing technology, trade and culture (Thunderdome).

steelweaver: A Mad Max future

(Via Brainsturbator)

The BP Spill and Why It’s Worse Than We All Think

Update: Another reason that it’s so incredibly horrible: The vast majority of the Exxon Valdez cleanup workers are now dead, with an average life expectancy of 51 years. (Thanks to Wade for the reminder)

My wife Jillian explains why the BP oil spill is worse than we think:

1. THEY KNEW IT WAS GOING TO HAPPEN.
2. THEY ARE NOT “ONE BAD APPLE”
3. WE WILL CONTINUE TO HEAR EMPTY PROMISES OF GREENING UP OUR ENERGY PROBLEMS, AND YET WASHINGTON WILL CONTINUE TO DO NOTHING.
4. BOYCOTTING ONE GAS COMPANY DOESN’T DO A THING.
5. EVEN IF WE DRIVE LESS, OUR LIVES ARE INEXTRICABLY LINKED TO PETROLEUM.
6. BP IS DUMPING MORE TOXIC WASTE INTO THE GULF AS PART OF THEIR “CLEANUP”
7. THE GOVERNMENT HAS DONE EVERYTHING IT CAN TO MAKE SURE BP COMES OUT ON TOP OF ALL OF THIS.
8. SO… HOW MUCH OIL IS REALLY GUSHING INTO THE GULF OF MEXICO?

Each point is expanded upon, with references.

Prime Surrealestate: The BP Spill and Why It’s Worse Than We All Think

See also: Some Oil Spill Related Articles Worth Your Attention

Some Oil Spill Related Articles Worth Your Attention

Burning pipeline, Lagos

I know oil spill coverage is everywhere, but here are a few articles that are worthy of your attention:

Greg Palast: BP’s OTHER Spill:

With the Gulf Coast dying of oil poisoning, there’s no space in the press for British Petroleum’s latest spill, just this week: over 100,000 gallons, at its Alaska pipeline operation. A hundred thousand used to be a lot. Still is.

On Tuesday, Pump Station 9, at Delta Junction on the 800-mile pipeline, busted. Thousands of barrels began spewing an explosive cocktail of hydrocarbons after “procedures weren’t properly implemented” by BP operators, say state inspectors. “Procedures weren’t properly implemented” is, it seems, BP’s company motto.

Few Americans know that BP owns the controlling stake in the trans-Alaska pipeline; but, unlike with the Deepwater Horizon, BP keeps its Limey name off the Big Pipe.

There’s another reason to keep their name off the Pipe: their management of the pipe stinks. It’s corroded, it’s undermanned and “basic maintenance” is a term BP never heard of.

How does BP get away with it? The same way the Godfather got away with it: bad things happen to folks who blow the whistle. BP has a habit of hunting down and destroying the careers of those who warn of pipeline problems.

(Thanks Bill!)

Think that’s bad?

More oil is spilled in the Niger delta every year than has been spilled so far in the Gulf:

Forest and farmland were now covered in a sheen of greasy oil. Drinking wells were polluted and people were distraught. No one knew how much oil had leaked. “We lost our nets, huts and fishing pots,” said Chief Promise, village leader of Otuegwe and our guide. “This is where we fished and farmed. We have lost our forest. We told Shell of the spill within days, but they did nothing for six months.”

That was the Niger delta a few years ago, where, according to Nigerian academics, writers and environment groups, oil companies have acted with such impunity and recklessness that much of the region has been devastated by leaks.

In fact, more oil is spilled from the delta’s network of terminals, pipes, pumping stations and oil platforms every year than has been lost in the Gulf of Mexico, the site of a major ecological catastrophe caused by oil that has poured from a leak triggered by the explosion that wrecked BP’s Deepwater Horizon rig last month.

That disaster, which claimed the lives of 11 rig workers, has made headlines round the world. By contrast, little information has emerged about the damage inflicted on the Niger delta. Yet the destruction there provides us with a far more accurate picture of the price we have to pay for drilling oil today.

(Thanks Marshall!)

And I haven’t even read this yet, but ProPublica has a long and damning investigation on BP.

And just in case you’re not furious enough yet, BP is spending $10,000 a day on Google ads to spin the disaster.

New process turns anything into oil

Sounds too good to be true:

Unlike other solid-to-liquid-fuel processes such as cornstarch into ethanol, this one will accept almost any carbon-based feedstock. If a 175-pound man fell into one end, he would come out the other end as 38 pounds of oil, 7 pounds of gas, and 7 pounds of minerals, as well as 123 pounds of sterilized water. While no one plans to put people into a thermal depolymerization machine, an intimate human creation could become a prime feedstock. “There is no reason why we can’t turn sewage, including human excrement, into a glorious oil,” says engineer Terry Adams, a project consultant. So the city of Philadelphia is in discussion with Changing World Technologies to begin doing exactly that.

Mindfully.org: Anything Into Oil

(Via Sauceruney)
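
The figures in the quote above form a tidy mass balance: the four outputs add back up to the 175-pound input. Here is a quick sanity check in Python; the numbers come straight from the quote, and the little script itself is just illustrative:

```python
# Mass balance for the thermal depolymerization example quoted above:
# a 175 lb input reportedly yields oil, gas, minerals and sterilized water.
input_lb = 175
outputs_lb = {"oil": 38, "gas": 7, "minerals": 7, "sterilized water": 123}

total = sum(outputs_lb.values())
print(f"outputs sum to {total} lb (input was {input_lb} lb)")  # 175 lb

for name, lb in outputs_lb.items():
    print(f"{name}: {lb} lb ({lb / input_lb:.0%} of input mass)")  # oil ~22%, water ~70%
```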

GM previews hydrogen car

From Wired:

Even if Bush’s hydrogen-car initiative is a cynical ploy, even if the Big Three are hiding behind hydrogen promises to prolong the reign of the V-8 and oilmen secretly want to strangle the fuel cell in its cradle, simple geology is carrying us toward a post-gasoline future. Petroleum’s days are numbered. GM executives themselves understand that. Some say the oil will last 20 more years and some say 50, but nobody says forever.

Wired: GM’s Billion-Dollar Bet

Rivalino Is in Here: Robotic Revolt and the Future Enslavement of Humanity

Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.

A Brief History of Artificial Intelligence

In 1941 a new invention that would one day revolutionize virtually every aspect of society was developed. Electronic computers were unveiled in both the United States and Germany. They were large, bulky units that required gargantuan air-conditioned rooms. They were a programmer’s nightmare, requiring the separate configuration of thousands of wires to get a program to run.

Eight years later, in 1949, the stored program computer was developed, making the task of programming simpler. Advancements in computer theory began the field of computer science and, soon thereafter, artificial intelligence. The invention of this electronic means of processing data created a medium that made man-made intelligence a possibility. And while the new technology made it possible, the link between human intelligence and machine intelligence was not fully observed until the 1950s. One of the first Americans to observe the principles of feedback theory was Norbert Wiener, whose work was influential in the development of early artificial intelligence.

In 1955 the Logic Theorist was developed by Newell and Simon, considered by many to be the first functional AI program. The Logic Theorist would attempt to solve problems according to a tree model, selecting the branch most likely to result in a correct answer. It was a stepping stone in the development of the AI field. A year later John McCarthy, who has come to be regarded as the father of AI, organized a gathering at Dartmouth College in New Hampshire which became known as the Dartmouth Conference. From that point on the field of study became known as artificial intelligence. And while the conference itself was not an overall success, it did bring the founders of AI together and laid the foundations of future AI research.

AI began to pick up momentum in the years following. While the field remained undefined, ideas were re-examined and built upon at AI research centers at Carnegie Mellon and MIT. New challenges were found and studied, including research on systems that could problem-solve efficiently by limiting their search, similar to the Logic Theorist. Another challenge was making a system that could learn by itself. In 1957 the General Problem Solver (GPS) was first tested. The program was developed by Newell and Simon, who had earlier success with the Logic Theorist. As an extension of Wiener's feedback principle, the GPS was capable of solving common-sense problems to a far greater extent than its predecessor programs. A year later John McCarthy announced his new creation to the world: the LISP language (short for LISt Processing). It was adopted as the language of choice among most AI developers and remains in use to this day.

MIT received a 2.2 million dollar grant from the US Department of Defense's Advanced Research Projects Agency (ARPA) to fund experiments involving AI. The grant was made to ensure that the US could stay ahead of the Soviet Union in technological advancements, and it served to increase the pace of AI development by drawing computer scientists from around the world.

SHRDLU was written by Terry Winograd at the MIT Artificial Intelligence Laboratory in 1968-1970. It carried on a simple dialog with a user, via a teletype, about a small world of objects (the BLOCKS world) shown on an early display screen. Winograd's dissertation, issued as MIT AI Technical Report 235 (February 1971) with the title Procedures as a Representation for Data in the Computer Program for Understanding Natural Language, describes SHRDLU in greater detail. Other programs developed in this period include STUDENT, an algebra solver, and SIR, which understood simple English sentences. These programs helped refine language comprehension and logic in AI programs. The development of the expert system, which predicts the probability of a solution under set conditions, aided in the advancement of AI research.
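
The Logic Theorist description above (explore a tree of possibilities, always expanding the branch that looks most likely to pan out) is essentially what later came to be called heuristic or best-first search. A minimal, purely illustrative Python sketch of that general idea follows; the toy problem and every name in it are invented for the example and are not taken from the original program:

```python
import heapq

def best_first_search(start, expand, score, is_goal, max_steps=10_000):
    """Explore a tree of candidate states, always expanding the state whose
    heuristic score looks most promising (lowest score is expanded first)."""
    frontier = [(score(start), start)]
    seen = set()
    for _ in range(max_steps):
        if not frontier:
            return None
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        if state in seen:
            continue
        seen.add(state)
        for child in expand(state):
            heapq.heappush(frontier, (score(child), child))
    return None

# Toy usage: reach 42 from 1 by repeatedly adding 1 or doubling, scoring each
# candidate by its distance from the target.
target = 42
result = best_first_search(
    start=1,
    expand=lambda n: [n + 1, n * 2],
    score=lambda n: abs(target - n),
    is_goal=lambda n: n == target,
)
print(result)  # 42
```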
During the 1970s new methods for testing AI programs were utilized, notably the Minsky frames theory. David Marr proposed new theories about machine vision, and the PROLOG language was developed during this time.

As the 1980s came to pass, AI was moving at an even faster pace and making its way into the corporate sector. Since IBM had contracted a research team in the years following the release of GPS, it was only logical that a continued expansion into the corporate world would eventually happen. In 1986 US sales of AI-related hardware and software reached $425 million. Companies like Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computer systems. DuPont, General Motors, and Boeing utilized expert systems heavily. Teknowledge and Intellicorp formed, helping to fill the demand for expert systems by specializing in software made specifically to aid in the production of expert systems.

It was in the years following this boom that computers first began to seep into private use, outside of laboratory settings. The personal computer made its debut in this period. Fuzzy logic, pioneered in the US, had the unique ability to make decisions under uncertain conditions. New technology developed in Japan during this period also aided AI research, and neural networks were being considered as a possible means of achieving artificial intelligence.

The military put AI-based hardware to vigorous testing during the war with Iraq, where AI-based technology was used in missile systems, heads-up displays and various other technologies. AI began to make the transition into the home during this period, with the popularity of AI computers growing. Applications such as voice and character recognition were made available to the public. Artificial intelligence has affected, and will continue to affect, our lives.

Do Intelligent Machines Dream of Global Conquest?

While AI has been beneficial in the past, can we be so sure that its impact will remain positive for us in the future, as it becomes more sophisticated?

Recently Stephen Hawking, the renowned physicist, warned that if humans hope to compete with the rising tide of artificial intelligence they will have to improve themselves through genetic engineering. This seems amusing at first, but there are several who agree with Hawking’s observations.

Intelligent machines could replace the need for menial labor on our part while massively increasing production. They could present us with all manner of intellectual problems, artistic pursuits and new spiritual debates. This all seems well and good, of course; there are many who would welcome such an advancement.

However, the danger alluded to by Hawking is that these intelligent machines could run amok, enslaving or attempting to replace humanity.

A Brief History of Genetic Engineering

It was in the Neolithic age that people began to save the seeds of the best specimens for the next planting, to domesticate and breed animals, and to use bacteria in the fermentation of food and beverages. The Neolithic age, in many respects, is the beginning of genetic engineering as we know it.

In 1866 a Czech monk studies peas through several generations and makes his postulations on the inheritance of biological characteristics in the species. His name is Gregor Mendel, and while his ideas are revolutionary, they are not widely appreciated for some four decades after their publication. It is in 1903 that the American biologist Walter Sutton proposes that genes are located on chromosomes, which have been identified through a microscope.

Eight years later the Danish biologist Wilhelm Johannsen devises the term “gene” and distinguishes genotypes (genetic composition) from phenotypes (open to influence from the environment). Biologist Charles B. Davenport, head of the US Eugenics Record Office in NY, publishes a book advising eugenic practices, based on evidence that undesirable characteristics such as “pauperism” and “shiftlessness” are inherited traits. The eugenics movement becomes popular in the US and Northern Europe over the next three decades, until Nazism dawns and the effects of a fully functional eugenics program are seen for the first time.

In 1922 the American geneticist Thomas H. Morgan and his colleagues devise a technique to map genes and prepare a gene map of the fruit fly chromosomes. 22 years later Oswald Avery and colleagues at the Rockefeller Institute demonstrate that genes are composed of deoxyribonucleic acid (DNA). Around the same time Erwin Schrödinger publishes the classic “What is Life?”, which ponders the complexities of biology and suggests that chemical reactions don’t tell the entire story.

In 1953 Francis Crick and James Watson, working at the Molecular Biology Laboratory at Cambridge, explain the double-helix structure of DNA. In 1971 Stanley Cohen of Stanford University and Herbert Boyer of the University of California, San Francisco develop the initial techniques for recombinant-DNA technologies. They publish the paper in 1973 and apply for a patent on the technologies a year later. Boyer goes on to become a co-founder of Genentech, Inc., which becomes the first firm to exploit rDNA technologies by making recombinant insulin.

In 1980 the US Supreme Court rules that recombinant microorganisms can be patented in the ground-breaking Diamond v. Chakrabarty case, which involved a bacterium engineered to break down the components of oil. The microorganism is never used to clean up oil spills because of concern over its uncontrollable release into the environment. In the same year the first Genentech public stock offering sets a Wall Street record.

A year later the first monoclonal antibody diagnostic kits are approved for sale in America. The first automatic gene synthesizer is also marketed. In 1982 the first rDNA animal vaccine is approved for use in Europe, while the first rDNA pharmaceutical product, insulin, is approved for use in the United States. This same year the first successful cross-species transfer of a gene occurs when a human growth gene is inserted into a lab mouse, and the first transgenic plant is grown.

In 1985 we see the first environmental release of genetically engineered microorganisms in the United States, despite controversy and heated debate over the issue. The so-called ice-minus bacteria is intended to protect crops from frost. In the same year the US declares that genetically engineered plants may be patented.

Transgenic pigs are produced in 1986 by inserting human growth hormone genes into pig embryos. The US Department of Agriculture experiment in Beltsville, Md., produces deformed and arthritic pigs. Two die before maturity and a third is never able to stand up.

In 1988 the first genetically engineered organism is approved for sale in Australia. Oncomouse, a mouse that was engineered to develop breast cancer by scientists at Harvard University with funding from DuPont, obtains a U.S. patent but is never patented in Europe. Many other types of transgenic mice are soon created. The Human Genome Project begins later in the year, whilst a German court stops the Hoechst pharmaceutical company from producing genetically engineered insulin after public protest over the issue.

In the 1990s it is Kary Mullis’s discovery of PCR and the development of automated sequencers that greatly enhance genetics research, becoming the warp drive for the age of molecular biology. Bioinformatics, proteomics and attempts at developing a mathematics (and computers capable) of determining protein folding will forever revolutionize the discovery of drugs and the development of novel proteins. New techniques like real-time PCR and microarrays can speak volumes about the level of genetic expression within a cell. Massive computers are being used to predict correlations between genotype and phenotype and the interaction between genes and environment.

These recent developments in molecular genetics can, if used properly, usher in a new age of evolution: one aided by genotyping and an understanding of which phenotypes those genotypes correspond to.

The Protest Against Genetic Modification

The argument against what could easily have been deemed “mad science” just decades ago is that genetically modified foods are unsafe for consumption as we do not yet know the long-term effects they will have on us or our ecosystem. From transgenic crops to animals, a growing opposition force has demanded that there be protections for citizens who have no desire to consume these unnatural products. The term biospiracy has been conjured up to distinctly brand conspiracies involving genetic engineering.

Eight multinationals under heavy scrutiny by protesters are Dow, Du Pont, Monsanto, Imperial Chemical Industries, Novartis, Rhone Poulenc, Bayer and Hoechst. The claim is that these companies are funding genetic experiments aimed at engineering food seeds that would allow crops grown on farmland to accept higher doses of herbicides without dying. The fear is that this practice will load the soil and our bodies with toxic chemicals, all for the profit of megacorporations.

And since this article is going to explain how robots will take over the world if we don’t genetically enhance ourselves, it would be most appropriate that I end this portion of the debate and go off into a rant about the dangers of NOT using genetic modification technologies.

Hoo-Mun Versus Mechanoid

We’ve seen films such as The Terminator portray a future in which intelligent machines have humans on the run. Some fear that this fantastic-seeming concept could eventually become a reality.

Computers have, on average, been doubling their performance every 18 months. Our intellect has thus far been unable to keep up with such a staggering rate of development, and as such there is a possibility that computers could develop an intelligence which would prove dangerous to human civilization.
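
Taken at face value, that 18-month doubling compounds faster than intuition suggests. A back-of-the-envelope sketch in Python (the doubling period is the figure quoted above; the timescales chosen are just illustrative):

```python
# Rough compounding of an 18-month performance doubling, relative to a
# baseline of 1.0 at year zero.
def performance_multiple(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

for years in (3, 10, 20, 30):
    print(f"after {years:>2} years: ~{performance_multiple(years):,.0f}x")
# after  3 years: ~4x
# after 10 years: ~102x
# after 20 years: ~10,321x
# after 30 years: ~1,048,576x
```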

The protests against the genetic modification revolution now under way slow the progress of this research, sometimes grinding experiments to a halt. Be it for spiritual reasons, for safety, or over questions of ethics, these protests are managing to stall and delay the development of practical and safe means by which we can advance our own minds and bodies to cope with new environments and new threats to our safety.

Inorganic technology, on the other hand, is embraced with very little question. From cell phones to personal computers, we see these technologies proliferating at an extraordinary rate. The creation of the Internet has allowed this technology to flourish even further, while also allowing protesters to link together and co-ordinate their efforts to stop genetic engineering from moving forward at the same pace as other technologies.

Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.

Then again, that’s just whacko.

However, if there’s even the remotest possibility, you can bet…

Rivalino will be in there.
