Tag: Full Articles

Scientists discover missing link in the emergence of life


Philosophers and scientists have argued about the origins of life from inorganic matter ever since Empedocles (430 B.C.) argued that everything in the universe is made up of a combination of four eternal ‘elements’ or ‘roots of all’: earth, water, air, and fire, and that all change is explained by the arrangement and rearrangement of these four elements. Now, scientists have discovered that simple peptides can organize into bi-layer membranes. The finding suggests a “missing link” between the pre-biotic Earth’s chemical inventory and the organizational scaffolding essential to life.

Daily Galaxy: Scientists Discover Missing Link Between Organic and Inorganic Life

(Thanks Wade)

George Bush I (1796–1859)

Found this article yesterday while reading the print edition of The New York Times Magazine (link to article). Fortunately, it’s online (for the time being), but I’ll copy n paste it in its entirety for your reading pleasure:-

By TED WIDMER
Published: July 22, 2007

None of us can control our ancestors. Like our children, they have minds of their own and invariably refuse to do our bidding. Presidential ancestors are especially unruly – they are numerous and easily discovered, and they often act in ways unbecoming to the high station of their descendants.

Take George Bush. By whom I mean George Bush (1796-1859), first cousin of the president’s great-great-great-grandfather. It would be hard to find a more unlikely forebear. G.B. No. 1 was not exactly the black sheep of the family, to use a phrase the president likes to apply to himself. In fact, he was extremely distinguished, just not in ways that you might expect. Prof. George Bush was a bona fide New York intellectual: a dabbler in esoteric religions whose opinions were described as, yes, ‘liberal’; a journalist and an academic who was deeply conversant with the traditions of the Middle East.

There was a time when the W-less George Bush was the most prominent member of the family (he is the only Bush who made it into the mid-20th-century Dictionary of American Biography). A bookish child, he read so much that he frightened his parents. Later he entered the ministry, but his taste for arcane controversy shortened his career, and no church could really contain him. Ultimately, he became a specialist at predicting the Second Coming, an unrewarding profession for most, but he thrived on it.

In 1831 he drifted to New York City, just beginning to earn its reputation as a sinkhole of iniquity, and found a job as professor of Hebrew and Oriental languages at what is now New York University. That same year, he published his first book, ‘The Life of Mohammed.’ It was the first American biography of Islam’s founder.

For that reason alone, the book would be noteworthy. But the work is also full of passionate opinions about the prophet and his times. Many of these opinions are negative – as are his comments on all religions. Bush often calls Muhammad ‘the impostor’ and likens him to a successful charlatan who has foisted an ‘arch delusion’ on his fellow believers. But he is no less critical of the ‘disastrous’ state of Christianity in Muhammad’s day. And throughout the book, Bush reveals a passionate knowledge of the Middle East: its geography, its people and its theological intensity, which fit him like a glove. For all his criticism of Muhammad, he returns with fascination to the story of ‘this remarkable man,’ who was ‘irresistibly attractive,’ and the power of his vision.

‘The Life of Mohammed’ went out of print a century ago, and there it was expected to remain, in perpetuity. But in the early 21st century, it was reissued by a tiny publisher simply because of the historical rhyme that a man with the same name occupied the White House. The first George Bush never witnessed the Second Coming, but now his book was enjoying an unexpected afterlife.

Predictably, it enraged some readers in the Middle East, where rage is an abundant commodity. In 2004, Egyptian censors at Cairo’s Al-Azhar Islamic Research Academy denounced the book by President Bush’s ‘grandfather’ as a slander on the prophet, and the State Department was forced to issue a document clarifying the family relationship. That document may have unintentionally fanned the flames when it pointed out that ‘The Life of Mohammed’ never compares Muslims to insects, rats or snakes, though it does, on occasion, liken them to locusts.

The stage was set for conspiracy theories to spread across the Middle East like sandstorms. But then something really strange happened. The same censors read carefully through the book and in 2005 issued an edict that reversed their earlier ruling, admitting that it was O.K. Bush’s theological intensity might kill him with an American audience, but in the Middle East it seems to have allowed him to pass muster. Clearly this passionate religious scholar was no enemy of Islam. You could almost say that he was part of the family.

Perhaps the Egyptians could sense something honorable about this distant life, which dedicated itself to the search for knowledge. After George Bush died, a friend remembered the feeling of walking into his apartment, a third-story walk-up on Nassau Street, ‘a kind of literary Gibraltar,’ where he would find the professor surrounded by his piles of rare and ancient volumes.

It all seems so improbable. George Bush? A bookworm? In a crummy apartment? A mystic might look at this history and find evidence that God is indeed inscrutable. But as the first George Bush knew, religions, like families, contain plentiful contradictions. As the current George Bush has discovered, no place can tease them out like the Holy Land.

Computer analysis provides Incan string theory

Via me, Fell, a pretend-ninja and superstar in my own mind

Oh, and New Scientist.

The mystery surrounding a cryptic string-based communication system used by ancient Incan administrators may at last be unravelling, thanks to computer analysis of hundreds of different knotted bundles.

The discovery provides a tantalising glimpse of bureaucracy in the Andean empire and may, for the first time, also reveal an Incan word written in string.

Woven from cotton, llama or alpaca wool, the mysterious string bundles – known as Khipu – consist of a single strand from which dangle up to thousands of subsidiary strings, each featuring a bewildering array of knots. Of the 600 or so Khipu that have been found, most date from between 1400 AD and 1500 AD. However, a few are thought to be about 1000 years old.

Spanish colonial documents suggest that Khipu were in some way used to keep records and communicate messages. Yet how the cords were used to convey useful information has puzzled generations of experts.

Unpicking the knots

Now, anthropologist Gary Urton and mathematician Carrie Brezine at Harvard University, Massachusetts, US, think they may have begun unravelling the knotty code. The pair built a searchable database containing key information about Khipu strings, such as the number and position of subsidiary strings and the number and position of knots tied in them.

The pair then used this database to search for similarities between 21 Khipu discovered in 1956 at the key Incan administrative base of Puruchuco, near modern-day Lima in Peru. Superficial similarities suggested that the Khipu could be connected, but the database revealed a crucial mathematical bond – the data represented by subsidiary strands on some of the Khipu could be combined to create the strands found on more complex ones.

This suggests the Khipu were used to collate information from different parts of the empire, which stretched for more than 5500 kilometres. Brezine used the mathematical software package Mathematica to scour the database for other mathematical links – and found several.
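Conceptually, the relationship the database search uncovered is simple to state: the values knotted onto the cords of a higher-level Khipu equal the sums of groups of cords on lower-level ones. Here is a minimal sketch of that kind of check in Python; the cord values and grouping are invented for illustration, and real Khipu analysis is far messier:

```python
# Hypothetical illustration: treat each khipu as a list of numbers read
# from its subsidiary cords, then test whether one khipu's cords hold
# the sums of groups of cords on another. All values are invented.

def summarizes(summary, detail, group_size):
    """True if each cord on `summary` equals the sum of a consecutive
    group of `group_size` cords on `detail`."""
    if len(detail) != len(summary) * group_size:
        return False
    groups = (detail[i:i + group_size]
              for i in range(0, len(detail), group_size))
    return all(sum(g) == s for g, s in zip(groups, summary))

# An invented "local" khipu and an invented "regional" khipu whose two
# cords hold the totals of three local cords each.
local = [3, 5, 2, 7, 1, 4]
regional = [10, 12]  # 3+5+2 and 7+1+4

print(summarizes(regional, local, group_size=3))  # True
```

A search like Urton and Brezine’s amounts to running comparisons of this sort across every pairing and grouping of Khipu in the database and keeping the matches.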

First word

“Local accountants would forward information on accomplished tasks upward through the hierarchy, with information at each successive level representing the summation of accounts from the levels below,” Urton says. “This communication was used to record the information deemed most important to the state, which often included accounting and other data related to censuses, finances and the military.”

And Urton and Brezine go a step further. Given that the Puruchuco strings may represent collations of data from different regions, they suggest that a characteristic figure-of-eight knot found on all of the 21 Puruchuco strings may represent the place itself. If so, it would be the first word ever to be extracted from an Incan Khipu.

Completely deciphering the Khipu may never be possible, Urton says, but further analysis of the Khipu database might reveal other details of Incan life. New archaeological discoveries could also throw up some more surprises, Urton told New Scientist.

Don’t call it a comeback

Daniel Pinchbeck, and the fine folks at FutureHi, are starting a project called Metacine: a Magazine for the New Edge. It’s about stuff like Burning Man and, like Future Hi, “new” psychedelic culture.

It sounds a lot like Mondo 2000, a magazine for the new edge that ran sporadically from the late 80s (under the title Reality Hackers) until around 1997. It had articles about Burning Man, raves, designer drugs, smart drugs, etc. and basically spawned the magazine Wired. Burning Man’s been going for nearly 2 decades now. Nothing new there. All the sustainable bio future stuff they’re talking about on the Metacine web site? Sounds like Mother Earth News or the Whole Earth Catalog.

So what’s “new edge” about all of this? I don’t think there’s anything wrong with any of what they’re doing. I’m excited about all of it, honestly. But trying to package it up as some sort of new movement sounds like journalese to me. I’ve been as guilty as anyone else about this. Just look through the Technoccult archives and you’ll find plenty of evidence.

Why this obsession with doing “new” things? Finding the trends, the edge, blah blah blah blah blah. Seems like we’re all still stuck in the past, rambling about sustainable energy and Leary’s 8 circuit model and all that. But is that really such a bad thing?

Then there’s Jason Louv’s attempt to create a new occult ultraculture. Rather than trying to document a new culture, Jason’s trying to will a new one into existence with his book. I admire what he’s doing, and I know he’s doing it for the right reasons. He wants to see a new generation of socially conscious occultists. It actually reminds me a lot of Terence McKenna’s stuff though, about the role of the shaman as a healer for the community. McKenna called his vision of the future an “archaic revival,” because everything he expected to occur was actually ancient.

Don’t get me wrong, I have a lot of respect for Jason and for the Future-Hi cats, and I’m sure Pinchbeck has the best intentions. I’ll be pre-ordering Generation Hex and will probably be a Metacine subscriber. But I’m worried that an obsession with novelty and “the next big thing” will only hurt all our long-term goals, stunt our personal development by making us trend whores, and blind us to realms of less glamorous possibility.

Biopunk: the biotechnology black market

The word biopunk has been bandied about for some time now. Google already has over 1,000 results for a search on the term. R.U. Sirius wrote a piece in Rolling Stone a couple years ago about the possibility of garage biotechnologists, a movement he called biopunk. But I’d like to throw a new meaning for the concept out there: the near future (already here?) biotechnology black market.

The biotechnology market has already captured the imagination of the business world. For the past few years it’s been hyped as the next big thing, the new dot-com bubble. For instance, Paul Allen wants to turn a neighborhood in Seattle into a biotech-fueled urbanist utopia.

Ample private and federal investment is being poured into biotech research, but I expect U.S. policies banning cloning research and limiting funding for stem cell research will effectively limit the U.S.’s role in biotechnology development. Less restrictive policies and/or cheaper labor will give Europe, Russia, and Asia advantages in the global biotech industry.

But other factors will drive an underground biotechnology market: the crippling expense of prescription drugs, health insurance, malpractice insurance, and student loan debts.

Chemistry students have been making money manufacturing LSD, MDMA, and other illegal drugs for years. But the demand for black market prescription drug clones could create a new use for the college chemistry lab. Imagine thousands of undergrads manufacturing HIV meds and other expensive drugs for cheap underground resale.

Meanwhile, medical school students, unlicensed doctors, or even licensed doctors trying to keep up with insurance payments will be performing a myriad of unauthorized procedures. Genesis P-Orridge could be at the forefront of a movement again. Sex changes are nothing new, but P-Orridge and Lady Jaye’s sex change as installation-art project is on the forefront of the body modification movement, which constantly grows more extreme. Face transplants are about to become a reality. But these black market surgical procedures won’t be limited to weird body art projects. Uninsured Americans will be seeking all types of surgical procedures on the black market, and finding students and doctors to perform them will become increasingly easy.

Of course, those policy restrictions will create another biotech black market: clandestine cloning research labs and illegal human testing projects. Illegal human testing is almost certainly already a reality. And even with recent improvements in the job market, there are still thousands of desperate unemployed people to be taken advantage of.

And let’s not forget R.U. Sirius’s frightening prediction from his Rolling Stone article: garage production of germ weapons.

Rivalino Is in Here: Robotic Revolt and the Future Enslavement of Humanity

Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.

A Brief History of Artificial Intelligence

In 1941 a new invention that would one day revolutionize virtually every aspect of society was developed. Electronic computers were unveiled in both the United States and Germany. They were large, bulky units that required gargantuan air-conditioned rooms, and they were a programmer’s nightmare, requiring the separate configuration of thousands of wires to get a program to run.

Eight years later, in 1949, the stored-program computer was developed, making the task of programming simpler. Advances in computer theory gave rise to the field of computer science, and soon thereafter artificial intelligence. The invention of this electronic means of processing data created a medium that made man-made intelligence a possibility.

And while this new technology made it possible, the link between human intelligence and machine intelligence was not fully observed until the 1950s. One of the first Americans to work out the principles of feedback theory, which proved influential to the development of early AI, was Norbert Wiener.

In 1955 the Logic Theorist was developed by Newell and Simon, considered by many to be the first functional AI program. The Logic Theorist would attempt to solve problems according to a tree model, selecting the branch most likely to result in a correct answer. It was a stepping stone in the development of the AI field. A year later John McCarthy, who has come to be regarded as the father of AI, organized a gathering at Dartmouth College in New Hampshire which became known as the Dartmouth Conference. From that point on the field of study became known as artificial intelligence. And while the conference itself was not an overall success, it did bring the founders of AI together and laid the foundations of future AI research.

AI began to pick up momentum in the years following. While the field remained undefined, ideas were re-examined and built upon at AI research centers at Carnegie Mellon and MIT. New challenges were found and studied, including research on systems that could problem-solve efficiently by limiting search, similar to the Logic Theorist, and on systems that could learn by themselves. In 1957 the General Problem Solver (GPS) was first tested. The program was developed by Newell and Simon, who had earlier success with the Logic Theorist. As an extension of Wiener’s feedback principle, the GPS was capable of solving common-sense problems to a far greater extent than its predecessors.

A year later John McCarthy announced his new creation to the world: the LISP language (short for LISt Processing). It was adopted as the language of choice among most AI developers and remains in use to this day. MIT received a $2.2 million grant from the US Department of Defense’s Advanced Research Projects Agency (ARPA) to fund experiments involving AI. The grant was made to ensure that the US could stay ahead of the Soviet Union in technological advancements, and it increased the pace of AI development by drawing computer scientists from around the world.

SHRDLU was written by Terry Winograd at the MIT Artificial Intelligence Laboratory in 1968-1970. It carried on a simple dialog with a user, via a teletype, about a small world of objects (the BLOCKS world) shown on an early display screen. Winograd’s dissertation, issued as MIT AI Technical Report 235 (February 1971) under the title Procedures as a Representation for Data in a Computer Program for Understanding Natural Language, describes SHRDLU in greater detail. Other programs developed in this period include STUDENT, an algebra solver, and SIR, which understood simple English sentences. These programs helped refine language comprehension and logic in AI programs. The development of expert systems, which predict the probability of a solution under set conditions, aided in the advancement of AI research.
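The “tree model” search described above for the Logic Theorist and GPS – expand the branch that looks most promising first – survives today as best-first search. Here is a minimal sketch in Python, with an invented toy problem standing in for theorem proving; this illustrates the idea only, not the Logic Theorist’s actual code:

```python
import heapq

# Minimal best-first search: always expand the most promising node,
# as ranked by a heuristic score. The toy problem below (reach a goal
# number via +1 or *2 steps) is invented for illustration.

def best_first_search(start, goal, neighbors, score):
    frontier = [(score(start), start)]
    seen = {start}
    while frontier:
        _, node = heapq.heappop(frontier)  # most promising branch first
        if node == goal:
            return True
        for nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), nxt))
    return False

print(best_first_search(
    1, 21,
    neighbors=lambda n: [m for m in (n + 1, n * 2) if m <= 21],
    score=lambda n: abs(21 - n)))  # prefer nodes closer to the goal
```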
During the 1970s new methods for testing AI programs were utilized, notably Minsky’s frames theory. David Marr proposed new theories about machine vision, and the Prolog language was developed during this time.

As the 1980s came to pass, AI was moving at an even faster pace and making its way into the corporate sector. Since IBM had contracted a research team in the years following the release of GPS, it was only logical that a continued expansion into the corporate world would eventually happen. In 1986 US sales of AI-related hardware and software reached $425 million. Companies like Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computer systems. DuPont, General Motors, and Boeing utilized expert systems heavily. Teknowledge and Intellicorp formed, helping fill the demand for expert systems by specializing in software to aid in their production.

It was in the years following this boom that computers first began to seep into private use, outside laboratory settings; the personal computer made its debut in this period. Fuzzy logic, pioneered in the US, had the unique ability to make decisions under uncertain conditions. New technology developed in Japan during this period also aided AI research, and neural networks were being considered as a possible means of achieving artificial intelligence.

The military put AI-based hardware to vigorous testing during the war with Iraq. AI-based technology was used in missile systems, heads-up displays and various other technologies. AI also began to make the transition into the home during this period, with applications such as voice and character recognition made available to the public. Artificial intelligence has affected, and will continue to affect, our lives.

Do Intelligent Machines Dream of Global Conquest?

While beneficial in the past, can we be so sure that this impact will remain positive for us in the future, as AI becomes more sophisticated?

Recently Stephen Hawking, the renowned physicist, warned that if humans hope to compete with the rising tide of artificial intelligence they will have to improve themselves through genetic engineering. Which seems amusing, at first, but there are several who agree with Hawking’s observations.

Intelligent machines could replace the need for menial labor on our parts while massively increasing production. They could overwhelm us with all forms of intellectual problems, artistic pursuits and new spiritual debate. This seems well and good, of course. There are many who would welcome such an advancement in that scenario.

However, the danger alluded to by Hawking is that these intelligent machines could run amok, enslaving or attempting to replace humanity.

A Brief History of Genetic Engineering

It was in the Neolithic age that people began saving the seeds of the best specimens for the next planting, domesticating and breeding animals, and using bacteria in the fermentation of food and beverages. The Neolithic age, in many respects, marks the beginning of genetic engineering as we know it.

In 1866 a Czech monk studies peas through several generations and makes his postulations on the inheritance of biological characteristics in the species. His name is Gregor Mendel, and while his ideas are revolutionary, they are not widely appreciated until some four decades after their publication. It is in 1903 that the American biologist Walter Sutton proposes that genes are located on chromosomes, which have been identified through a microscope.

Eight years later the Danish biologist Wilhelm Johannsen devises the term “gene” and distinguishes genotypes (genetic composition) from phenotypes (open to influence from the environment). Biologist Charles B. Davenport, head of the US Eugenics Record Office in NY, publishes a book advising eugenic practices, based on evidence that undesirable characteristics such as “pauperism” and “shiftlessness” are inherited traits. The eugenics movement becomes popular in the US and Northern Europe over the next three decades, until Nazism dawns and the effects of a fully functional eugenics program are seen for the first time.

In 1922 the American geneticist Thomas H. Morgan and his colleagues devise a technique to map genes and prepare a gene map of the fruit fly chromosomes. Twenty-two years later Oswald Avery and colleagues at the Rockefeller Institute demonstrate that genes are composed of deoxyribonucleic acid (DNA). Around the same time Erwin Schrödinger publishes the classic “What is Life?”, which ponders the complexities of biology and suggests that chemical reactions don’t tell the entire story.

In 1953 Francis Crick and James Watson, working at the Molecular Biology Laboratory at Cambridge, explain the double-helix structure of DNA. In 1971 Stanley Cohen of Stanford University and Herbert Boyer of the University of California in San Francisco develop the initial techniques for recombinant-DNA technologies. They publish the paper in 1973, and apply for a patent on the technologies a year later. Boyer goes on to become a co-founder in Genentech, Inc., which becomes the first firm to exploit rDNA technologies by making recombinant insulin.

In 1980 the US Supreme Court rules that recombinant microorganisms can be patented in the ground-breaking Diamond v. Chakrabarty case, which involved a bacterium engineered to break down the components of oil. The microorganism is never used to clean up oil spills, over concerns about its uncontrolled release into the environment. In the same year the first Genentech public stock offering sets a Wall Street record.

A year later the first monoclonal antibody diagnostic kits are approved for sale in America. The first automatic gene synthesizer is also marketed. In 1982 the first rDNA animal vaccine is approved for use in Europe, while the first rDNA pharmaceutical product, insulin, is approved for use in the United States. This same year the first successful cross-species transfer of a gene occurs when a human growth gene is inserted into a lab mouse, and the first transgenic plant is grown.

In 1985 we see the first environmental release of genetically engineered microorganisms in the United States, despite controversy and heated debate over the issue. The so-called ice-minus bacteria is intended to protect crops from frost. In the same year the US declares that genetically engineered plants may be patented.

Transgenic pigs are produced in 1986 by inserting human growth hormone genes into pig embryos. The US Department of Agriculture experiment in Beltsville, Md., produces deformed and arthritic pigs. Two die before maturity and a third is never able to stand up.

In 1988 the first genetically engineered organism is approved for sale in Australia. Oncomouse, a mouse that was engineered to develop breast cancer by scientists at Harvard University with funding from DuPont, obtains a U.S. patent but is never patented in Europe. Many other types of transgenic mice are soon created. The Human Genome Project begins later in the year, whilst a German court stops the Hoechst pharmaceutical company from producing genetically engineered insulin after public protest over the issue.

In the 1990s it is Kary Mullis’s discovery of PCR and the development of automated sequencers that greatly enhance the study of genetics, becoming the warp drive for the age of molecular biology. Bioinformatics, proteomics and the attempts at developing a mathematics (and computers capable) of determining protein folding will forever revolutionize the discovery of drugs and the development of novel proteins. New techniques like real-time PCR and microarrays can speak volumes about the level of genetic expression within a cell. Massive computers are being used to predict correlations between genotype and phenotype and the interaction between genes and environment.

These recent developments in molecular genetics can, if used properly, usher in a new age of evolution: one aided by genotyping and by understanding which phenotypes these genotypes correspond to.

The Protest Against Genetic Modification

The argument against what could easily have been deemed “mad science” just decades ago is that genetically modified foods are unsafe for consumption as we do not yet know the long-term effects they will have on us or our ecosystem. From transgenic crops to animals, a growing opposition force has demanded that there be protections for citizens who have no desire to consume these unnatural products. The term biospiracy has been conjured up to distinctly brand conspiracies involving genetic engineering.

Eight multinationals under heavy scrutiny by protesters are Dow, Du Pont, Monsanto, Imperial Chemical Industries, Novartis, Rhone Poulenc, Bayer and Hoechst. The claim is that these companies are funding genetic experiments aimed at engineering food seeds which would allow food supplies growing on farmland to accept higher doses of herbicides without dying. The fear is that this practice will load the soil and our bodies with toxic chemicals, all for the profit of megacorporations.

And since this article is going to explain how robots will take over the world if we don’t genetically enhance ourselves, it would be most appropriate that I end this portion of the debate and go off into a rant about the dangers of NOT using genetic modification technologies.

Hoo-Mun Versus Mechanoid

We’ve seen films such as The Terminator portray a future in which intelligent machines have humans on the run. Some fear that this fantastic-seeming concept could eventually become a reality.

Computers have, on average, been doubling their performance every 18 months. Our intellect has thus far been unable to keep up with such a staggering rate of development, and as such there is a possibility that the computers could develop an intelligence which would prove dangerous to our human civilization.
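As a quick worked example of what an 18-month doubling time implies (a rough Moore’s-law figure, not a precise measurement):

```python
# Rough arithmetic for an 18-month doubling time: performance grows
# by a factor of 2 ** (months / 18).
for years in (3, 6, 15):
    factor = 2 ** (years * 12 / 18)
    print(f"{years} years -> ~{factor:.0f}x")
# Prints: 3 years -> ~4x, 6 years -> ~16x, 15 years -> ~1024x
```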

The protests against the genetic modification revolution now under way slow the progress of this research, sometimes grinding experiments to a halt. Whether for spiritual reasons, safety concerns or questions of ethics, these protests are managing to stall and delay the development of practical and safe means by which we can advance our own minds and bodies to cope with new environments and new threats to our safety.

Inorganic technology, on the other hand, is embraced with very little question. From cell phones to personal computers, we see these technologies proliferating at an extraordinary rate. The creation of the Internet has allowed this technology to flourish even more, while also letting protesters link together and co-ordinate their efforts to stop genetic engineering from moving forward at the same pace as other technologies.

Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.

Then again, that’s just whacko.

However, if there’s even the remotest possibility, you can bet…

Rivalino will be in there.
