Tag: cognitive science

N-Back Training Exercise Still Holding Up in Tests

Above: the Soak Your Head Dual N-Back Application

I’ve covered research on how most brain training exercises don’t actually hold up in tests. The good news is that dual n-back training, also covered here previously, is continuing to hold up in tests:

Jonides, who is the Daniel J. Weintraub Collegiate Professor of Psychology and Neuroscience, collaborated with colleagues at U-M, the University of Bern and the University of Taipei on a series of studies with more than 200 young adults and children, demonstrating the effects of various kinds of n-back mental training exercises. The research was supported by the National Science Foundation and by the Office of Naval Research.

According to Jonides, the n-back task taps into a crucial brain function known as working memory—the ability to maintain information in an active, easily retrieved state, especially under conditions of distraction or interference. Working memory goes beyond mere storage to include processing information.

Medical Express: A Brain Training Exercise That Really Does Work

(Thanks Bill!)

Soak Your Head offers a free Web-based n-back training program, but it requires Microsoft Silverlight. You can find a list of other applications here.

Another way to boost your mental capabilities? Play first person shooters. This NPR story provides an overview of the research. You can also find a research paper that looks at multiple studies here (PDF).

The best way to stave off cognitive decline, however, may be to spend time socializing with friends.

Bees Can Solve the “Travelling Salesman Problem”


What’s interesting is that this doesn’t seem to be a result of “swarm intelligence” – individual bees can somehow make these calculations:

Scientists at Queen Mary, University of London and Royal Holloway, University of London have discovered that bees learn to fly the shortest possible route between flowers even if they discover the flowers in a different order. Bees are effectively solving the ‘Travelling Salesman Problem’, and these are the first animals found to do this.

The Travelling Salesman must find the shortest route that allows him to visit all locations on his route. Computers solve it by comparing the length of all possible routes and choosing the shortest. However, bees solve it without computer assistance using a brain the size of a grass seed. […]

Co-author and Queen Mary colleague, Dr. Mathieu Lihoreau adds: “There is a common perception that smaller brains constrain animals to be simple reflex machines. But our work with bees shows advanced cognitive capacities with very limited neuron numbers. There is an urgent need to understand the neuronal hardware underpinning animal intelligence, and relatively simple nervous systems such as those of insects make this mystery more tractable.”

PhysOrg: Bumblebees can find the solution to a complex mathematical problem which keeps computers busy for days
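The brute-force approach the excerpt attributes to computers is easy to sketch: enumerate every ordering of the stops and keep the shortest. Here’s a minimal Python illustration using made-up distances between four hypothetical flowers (the flower names and distances are mine, not from the study):

```python
from itertools import permutations

# Hypothetical symmetric distances between four flowers.
DIST = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def dist(a, b):
    """Look up a distance in either direction."""
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def shortest_route(flowers, start="A"):
    """Brute force: compare the length of every possible route."""
    best_route, best_len = None, float("inf")
    for perm in permutations([f for f in flowers if f != start]):
        route = (start,) + perm + (start,)  # return to the starting point
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

route, length = shortest_route(["A", "B", "C", "D"])
print(route, length)  # prints ('A', 'B', 'D', 'C', 'A') 23
```

The catch, of course, is that the number of routes grows factorially with the number of stops, which is why this keeps computers busy for days on large instances — and why the bees’ apparent shortcut is so interesting.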

(via Fadereu)

A Grand Unified Theory of Artificial Intelligence


Early AI researchers saw thinking as logical inference: if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly. One of AI’s first projects was the development of a mathematical language — much like a computer language — in which researchers could encode assertions like “birds can fly” and “waxwings are birds.” If the language was rigorous enough, computer algorithms would be able to comb through assertions written in it and calculate all the logically valid inferences. Once they’d developed such languages, AI researchers started using them to encode lots of commonsense assertions, which they stored in huge databases.

The problem with this approach is, roughly speaking, that not all birds can fly. And among birds that can’t fly, there’s a distinction between a robin in a cage and a robin with a broken wing, and another distinction between any kind of robin and a penguin. The mathematical languages that the early AI researchers developed were flexible enough to represent such conceptual distinctions, but writing down all the distinctions necessary for even the most rudimentary cognitive tasks proved much harder than anticipated.
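The classical approach described above amounts to forward chaining over a database of assertions. A toy sketch in Python (the facts and the single rule are my own illustrative examples, not an actual early AI system):

```python
# Facts are (predicate, entity) pairs; a rule says "if X has the premise
# predicate, X also has the conclusion predicate" -- e.g. birds can fly.
facts = {("bird", "waxwing")}
rules = [("bird", "can_fly")]

def infer(facts, rules):
    """Forward chaining: apply rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, entity in list(derived):
                if predicate == premise and (conclusion, entity) not in derived:
                    derived.add((conclusion, entity))
                    changed = True
    return derived

print(infer(facts, rules))  # includes ('can_fly', 'waxwing')
```

The brittleness the article points to shows up immediately: this rule declares that *every* bird flies, and carving out caged robins, injured robins, and penguins means hand-writing an exception for each case.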

Embracing uncertainty

In probabilistic AI, by contrast, a computer is fed lots of examples of something — like pictures of birds — and is left to infer, on its own, what those examples have in common. This approach works fairly well with concrete concepts like “bird,” but it has trouble with more abstract concepts — for example, flight, a capacity shared by birds, helicopters, kites and superheroes. You could show a probabilistic system lots of pictures of things in flight, but even if it figured out what they all had in common, it would be very likely to misidentify clouds, or the sun, or the antennas on top of buildings as instances of flight. And even flight is a concrete concept compared to, say, “grammar,” or “motherhood.”

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

PhysOrg: A Grand Unified Theory of Artificial Intelligence
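The cassowary revision in the excerpt is essentially a Bayesian update: start with a high probability of flight because cassowaries are birds, then revise downward when the weight evidence arrives. A minimal sketch of that reasoning pattern — this is plain Bayes’ rule in Python, not the Church language itself, and all the numbers are made up for illustration:

```python
def posterior_flies(p_flies, p_evidence_if_flies, p_evidence_if_not):
    """Bayes' rule: update P(flies) after observing a piece of evidence."""
    numerator = p_evidence_if_flies * p_flies
    denominator = numerator + p_evidence_if_not * (1 - p_flies)
    return numerator / denominator

# "The cassowary is a bird" -> most birds fly, so start high.
p = 0.9

# Evidence: cassowaries can weigh almost 200 pounds. Flying birds almost
# never weigh that much; flightless birds often do. (Hypothetical numbers.)
p = posterior_flies(p, p_evidence_if_flies=0.01, p_evidence_if_not=0.5)
print(round(p, 3))  # prints 0.153 -- probably can't fly after all
```

The point of a language like Church is that this kind of revision falls out of the inference rules automatically, rather than requiring a hand-written exception for every heavy bird.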

(Thanks Josh!)
