Tag: augmented reality

Augmented Reality diving application


The Fraunhofer Institute for Applied Information Technology FIT just presented an Augmented Reality system for use under water. A diver’s mask with a special display lets the diver see his or her real submarine surroundings overlaid with computer-generated virtual scenes.

In the pilot application, an AR game, the player sees a coral reef with shoals, mussels and weeds, instead of a plain indoor pool. Applications for professional divers are being investigated.

Science Daily: Augmented Reality Under Water

(via ???)

Augmented reality for the blind


Now here’s a ray of sunshine on a cloudy day. LookTel is an object identifier – you point it at something and it tells you, using real speech, what it is. You can teach it to recognize new objects, and when you need to ID something on the fly, you can stick on an image sticker and have the program read that instead. It’s more or less a barcode and QR scanner with some image recognition thrown in, but it really could be a boon to those with failing – or failed – eyesight.

The system needs a little more computing power than is available in the average smartphone, so you need a local PC to help ID some things.

CrunchGear: LookTel, an app for the blind

(Via Augmented Times; see also their previous post on AR and blindness)

Create your own augmented reality maps – Layar tutorial


Do you want to make your own layer? This tutorial tells you how to do it! These are the requirements to create your own layer:

Webserver with PHP and JSON support
MySQL database with phpMyAdmin
For testing: Layar installation on your iPhone 3GS or Android based phone (with GPS and compass)
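To give a sense of what the server side involves, here is a minimal Python sketch of the kind of JSON a Layar point-of-interest endpoint returns. The field names (`layer`, `hotspots`, microdegree `lat`/`lon`, `errorCode`) follow my recollection of the Layar developer API of that era – treat the exact schema as an assumption, and note the tutorial itself builds this in PHP against MySQL.

```python
import json

# Sketch of a Layar "getPOIs"-style response. Field names and the
# microdegree coordinate convention are assumptions based on the Layar
# API of the time, not taken from the tutorial itself.
def get_pois(layer_name, pois):
    hotspots = []
    for i, poi in enumerate(pois):
        hotspots.append({
            "id": str(i),
            "title": poi["title"],
            # Layar expected integer microdegrees (degrees * 1e6)
            "lat": round(poi["lat"] * 1e6),
            "lon": round(poi["lon"] * 1e6),
        })
    return {
        "layer": layer_name,
        "hotspots": hotspots,
        "errorCode": 0,
        "errorString": "ok",
    }

# Hypothetical layer with one POI near the Stedelijk Museum.
response = get_pois("museumlayer",
                    [{"title": "Stedelijk Museum",
                      "lat": 52.358, "lon": 4.8796}])
print(json.dumps(response, indent=2))
```

In the tutorial's setup, a PHP script would build the same structure from rows in the MySQL database and serve it to the Layar client on the phone.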

Stedelijk Museum: Creating a Layar layer: a step by step tutorial

(via Bruce Sterling)

Skinput turns your arm into a touchscreen


In Skinput, a keyboard, menu, or other graphics are beamed onto a user’s palm and forearm from a pico projector embedded in an armband. An acoustic detector in the armband then determines which part of the display is activated by the user’s touch. As the researchers explain, variations in bone density, size, and mass, as well as filtering effects from soft tissues and joints, mean different skin locations are acoustically distinct. Their software matches sound frequencies to specific skin locations, allowing the system to determine which “skin button” the user pressed.
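To illustrate the matching step described above, here is a hedged Python sketch: each "skin button" gets a stored acoustic profile (energy per frequency band), and a new tap is assigned to the location with the nearest profile. The actual Skinput prototype used machine-learned classifiers rather than this nearest-profile stand-in, and the profiles below are invented for illustration.

```python
import math

# Each skin location is characterized by a profile of energy per
# frequency band; a tap is classified by Euclidean nearest profile.
# (Stand-in for the learned classifier the real system used.)
def classify_tap(tap_profile, location_profiles):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(location_profiles,
               key=lambda loc: distance(tap_profile, location_profiles[loc]))

# Hypothetical three-band energy profiles for three arm locations.
profiles = {
    "wrist":   [0.9, 0.4, 0.1],
    "forearm": [0.5, 0.8, 0.3],
    "elbow":   [0.2, 0.3, 0.9],
}
print(classify_tap([0.85, 0.45, 0.15], profiles))  # closest to "wrist"
```

The point the researchers make is exactly what makes this workable: bone density, mass, and soft-tissue filtering vary along the arm, so the stored profiles for different locations are acoustically distinct enough to separate.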

Read More – PhysOrg: Skinput turns your arm into a touchscreen

(via Edge of Tomorrow)

Recognizr: face recognition software for mobile phones

Last July TAT (“The Astonishing Tribe”) posted a concept video of their augmented social face-card system (okay, I made that term up, what else should we call it?). The video tickled the imagination with over 400,000 views.

TAT has since teamed up with Polar Rose, a leading computer vision services company, to turn that concept into a reality. The TAT Cascades system combined with Polar Rose’s FaceLib gives us this prototype called Recognizr.

Read More – Games Alfresco: Your Face Is A Social Business Card

(via Bruce Sterling)

Augmented reality tattoos


Not much info about this:

This ThinkAnApp augmented reality tattoo looks like a plain black box, but when placed in front of a webcam, a winged dragon emerges.

Video and more pictures at TrendHunter

(via Chris Arkenberg)

Futurist Chris Arkenberg interviewed by Technoccult

(Above: Chris Arkenberg)

Chris Arkenberg is a visiting researcher at the Institute for the Future, an organizer of the event AR DevCamp, a musician operating under the name n8ur, and a big picture thinker. I talked to him via instant message about forecasting, how to navigate the future, and more. You can find him on Twitter here and his web site is here.

Klint Finley: You’re a visiting researcher at the Institute for the Future, and you’re working on their Ten Year Forecast. Can you explain what the Ten Year Forecast is, and what your own day to day role in it is?

Chris Arkenberg: The Ten Year Forecast is an annual research arc that looks at global issues impacting the next decade. We develop major forecasts then break each of those out into different scenarios to give organizations models for anticipating the future and adjusting their strategy accordingly. My role is providing research and forecasts for the Global Power and Carbon Economy arcs.

In Carbon, I’ve been profiling global energy dispositions. E.g., “What natural resources does China have under its lands and what is the spread of its energy use?” In Global Power, I’ve been analyzing insurgency movements, notably the narcoinsurgency in Mexico, the MEND movement in Nigeria, and the nexus of terrorism, insurgency, and international drug trafficking in Northern Africa.

I noticed you mentioned The Pirate Bay as a global power the other day as well.

Well, Pirate Bay is interesting as an enclave of free information. And they kicked their game up with the recent release of their anonymizing service, effectively acting as an encrypted traffic node. As such, they certainly represent a challenge to traditional systems of control.

Let’s go back a moment. How exactly does forecasting work? What’s the process like?

To begin with, I’d like to just underline that forecasting and prediction are very different. As futurists, we’re not making predictions but, rather, making approximations based on existing trends. I like to think of it as collapsing probability space into the most likely futures.

So having said that, there are many forecasting methodologies but most of them begin with scanning. This is a process of tracking information flows to get signals around your domain. Signals are essentially any event within the domain that you’re researching. So you pay attention to as many data streams as possible to get a feel for the emerging trends, where the money is flowing, social politics, etc… And from this you can start to derive estimates of where things are heading.

Typically this activity is followed by many different methods of analysis. You might talk to experts in the field, you might use different types of axial analysis, e.g. ubiquitous vs. niche, social vs. individual. Then you consider how the trends you’re looking at would manifest through different aspects of the world. STEEP & DEGEST are common methodologies – these are just acronyms, e.g. STEEP: Social, Technological, Economic, Environmental, Political. Then typically we’ll all work together to share our forecasts and brainstorm around the core narratives.

Now, again, forecasting is about exploring probability space and collapsing down what is possible into what is likely. So a forecast may be “Climate change will impact water and food”. The scenarios for this forecast then look at different tracks. A positive scenario would look at trends in technology for growing stronger food, recapturing water, and desalination, suggesting how we might overcome the problem with enough concerted effort. Conversely, a collapse scenario would consider the outcome of rapid and severe climate change, more fighting than cooperation, major migration, and the challenges of adaptation once mitigation is no longer possible. We might do 4 or 5 of these different scenarios to model different outcomes based on the prevailing trends.

In this manner, you provide both a narrative of what the future may hold, good & ill, as well as possible paths towards engineering the positive future and avoiding the negative.


So you spend your time reading as much news and analysis as you possibly can on carbon and emerging powers, interview experts, and so on – then work with a group to synthesize that data into forecasts?

Essentially. Though I will typically offer my own forecasts up front then work with the group to see what the most interesting narrative threads are and how they integrate with the overall theme. I take a lot of notes, draw a lot of diagrams, and try to compile what I think is the primary set of trends.

You were also recently working on IFTF’s “When Everything is Programmable” project. What was that, and what did you learn from it?

That’s part of the Technology Horizons arc, which focuses more on, as you’d expect, technologies and how they may impact human systems in the near future. For me it was a great opportunity. I did my BA at UCSC in Neuroscience but hadn’t really done much with it since being in tech for so long. My focus in TH was on Neuroprogramming so it was a great chance to really dive back into that subject. It was also really valuable to have a focus. I’m a systems generalist by default so I tend to hop around a lot. But I really enjoy doing a deep dive in a particular sector and TH gave me that opportunity. It was also my first pass at working with the IFTF methodologies so I really learned a lot about their process and how the teams work together.

(Above: An EEG Twitter interface)

What is the most promising neuroprogramming development you’ve encountered and what is the most frightening development? (I realize those could be the same thing…)

Hmm… I think brain machine interface and brain computer interface have tremendous growth ahead. When I started researching the topic I thought it would be pretty sci-fi but it’s actually moving very quickly and there is a ton of R&D happening. But in general, the trend towards integrating human physiology and machine capabilities is an extraordinary field of emerging possibilities, both scary and awesome.

Perhaps the most promising advances are in medicine. There’s a lot of progress in using implants, genetic engineering, and focused transcranial magnetism to help patients suffering depression, Parkinson’s, ALS, and Alzheimer’s, as well as some of the work being done inducing spiritual experiences, creativity, and focus. Similarly, the work integrating prosthetic devices is making tremendous strides, illustrated by the recent Nat Geo cover story on bionics. It won’t be long before prosthetic limbs and artificial sense organs are as good as the original, and often can be modified to have even more functionality. So there’s a lot of hope in patching people back together after trauma & injury. And there’s a really interesting future where these mods are more common and often tuned to enhanced performance.

As far as the most sinister development, that’s hard to say at this point. DARPA is up to their usual shenanigans, funding a lot of work around creating more effective military patrol. I’m not convinced this is totally evil so much as the inevitable march of progress in a world where warfare is still commonplace. But they’re funding a lot of research to enable patrols to have integrated communication, identification, gesture controls, voice recognition, etc. A lot of this stuff isn’t strictly implant-based BCI but it represents this ongoing trend to integrate computation and digital comm closer & closer to the human in a highly natural & intuitive way. So if you’re a patrol leader you want your silent gestures to be “visible” through the meshnet when they’re not visible by line of sight. And you might want those gestures to kick off a set of executables that push formations out to all team members. Likewise, all members benefit from HUD AR showing targets, routes, wayfinding, etc. Evils aside, it’s interesting to see these developments in an environment that has tremendous selective pressures, eg a bullet to the head if your comm fails.

So again, maybe not exactly sinister but nevertheless very indicative of the way the tech is moving. Eventually this stuff will be civilian tech. There’s all sorts of paranoia that can be summoned up around some of these developments. Having wireless implants that let you interface with a connected computer invites all sorts of control fears, freaky hacking scenarios, and general privacy issues. It’s a rich collage that will likely play out to some degree in all these areas as we move forward.

So really: how far are we from psionic brain implants?

Ha! Psionic brain implants are a sort of sci-fi possibility when you follow this trend. At some point in the future there is a high likelihood that some members of the populace will have embedded wireless devices that will translate thought into action on a device, in the cloud, or even in another augmented head. Currently this is as simple as driving a cursor with your mind but it seems inevitable that this simple interface will include some form of back-channel chat and possibly additional sensory modalities like “seeing” video in your mind’s eye or hearing remote audio. The concept of having a full web-like interface behind your eyes is probably quite a ways off given the interface requirements for such fidelity, let alone the actual user experience of navigating the web with your mind.

What sort of skills and technologies do you think it’s most important for people today to learn to live in the future?

Accept that we live in a world of great change. You have to be agile and prepared to adapt. The fundamental global systems of civilization are shifting with the impact of instantaneous communication, globalization, and ubiquitous computing. Add to this the threats of climate change and a declining fossil fuel infrastructure and you have a tremendous amount of challenges ahead. I feel it’s critical to embrace the change and try to both anticipate and design the future. The future is not yet writ so you can always influence it, perhaps now more than ever.

Along these lines, I think it’s going to be more and more critical to build local and global networks of like-minds with the capacity to design, fabricate, manufacture, and evolve socioeconomic systems. I suspect that things will get more and more local as they get increasingly globalized. I personally feel the need to learn more CAD design so I can get in on local fab and desktop manufacturing.

I also think it’s important for people to find a balance between information value & overload. Scanning is critical but it has to be boiled down to a manageable scope in order to be actually useful. There’s a real challenge to avoid the paralysis of knowing too much.

Yeah, I deal with that every day. Some days I find I can’t blog anything because I’m too overwhelmed with material to blog.

Nature. Get outside, move around, always remember the body. Take some time to let it all sink in on a subconscious level. Then you can integrate.

(Above: augmented reality facial recognition)

One of your many interests is augmented reality, and you helped organize Augmented Reality Developers Camp [sic]. In the past few days I’ve linked to a couple of articles on the “dark side of augmented reality” – things like using augmented reality to obscure unpleasant things from your vision, or using facial recognition software to pull up information on strangers you encounter on the street. Is there a way for citizens of today who aren’t necessarily developers or technologists to get involved in how this technology, which could affect all of us, evolves?

Like all technologies, augmented reality is only as good or bad as the people who engineer its applications. To guide this, people can be more active in the emerging AR consortiums and communities. That’s basically what AR DevCamp is about: getting all the players together to coordinate and design with a lot of intention so that the future platform is open and interoperable. Blogging and speaking about these things is always helpful. Influence in the social web should not be under-rated. And interviewing the people who are designing the tools can offer you a chance to hold up a mirror to their perhaps unquestioned assumptions about how great and harmless AR will be.

Ultimately, the world is changing and AR will be a part of that. But like all tools, sociology, economics, and natural feedbacks will reinforce the stuff that works and weed out the stuff that fragments or puts us at risk.

Well, actually, that raises another question – could non-developers get anything out of AR DevCamp?

Absolutely. Though I should say that since AR DevCamp is an open unconference each one will be different. I’m not a developer but I was keenly interested in the emerging technology, design considerations, possibilities for integrating social markups, strategies, trend analysis, etc… I found all of these things and more at our AR DevCamp. And anyone can go and propose a topic. Certainly ethical issues would be a great one and would be very well received, in my opinion.

Then why is “developer” in the title? That seems a little off-putting.

Not developer. Development. Dev is just an admittedly confusing shorthand.

My bad. But still, that implies, to me at least, that it’s an event for developers.

And that’s the general intent – to sort out the technical standardization. But again, it’s an open unconference so anything that gets proposed gets voted on as a possible topic. You’ll find that people don’t just want to talk about standards and core tech.

So maybe we’ve stumbled on to one strategy: let non-developers know they can go to AR DevCamp, and encourage other camps to change their name.

Absolutely. I encourage everyone with abiding interests or passions around AR to go to the DevCamps.


(Above: Chris’s new free EP Western Rains)

You’re also a spiritual person, and a creative person – do you ever find that your creative or spiritual side conflicts with your work as a researcher or analyst?

There’s definitely some time & schedule challenges between the creative work and research. Music production – my primary creative hobby – takes a lot of time. But for me, moving into research and forecasting is the necessary outcome both of my spiritual orientation to the world and my desire to move away from a strictly managerial/tech/engineering career.

Having said that, my general perspective of the world is changing as I start really digging into the more rational considerations of human affairs, eg energy, money, survival. It was easy to be idealistic when I was deep in the esoterica. Ultimately, the spiritual side gave me the strength to really look at the world in all its hideous glory. I think it’s that anchor that allows me to balance a fairly detached view of systems analysis with a deep abiding desire to see good and hope and truth prosper.

I’m also almost 39 so the dynamic of my perspectives is shifting with the attendant requirements and responsibilities that come with age. 🙂

You can’t just magic the world up into what you want. You have to change yourself and align your will with actually producing the change you envision in the world.

What advice would you give to any would-be futurists/forecasters?

Learn about systems. You have to be able to look at all the different factors within the larger domain of research. This is, imo, one of the most fundamental and deep trends happening within the human operating system. Cradle-to-Cradle, Life Cycle Analysis, sustainability, global economics – all of these represent the need to think in terms of systems. You have to really think about all the factors, all the inputs & outputs of a given system, but do so in a way that defends a manageable scope. That’s the real challenge of good research and forecasting: knowing where to set bounds on the domain so you don’t end up researching everything.

My suspicion is that forecasters will become more and more important as average business & policy folk simply won’t have the time to research the rapidly increasing amount of info available, let alone commit time to factoring out plausible futures. So it’s up to those who have a general systems orientation towards the world, people who understand holism and non-linearity and have a real passion about pattern recognition, to make sense of the world as we pass through this great transition. Forecasters and futurists should find kinship with the best science fiction writers and understand that both are really dealing with the creation of compelling narratives, and that these narratives are templates for change. In this respect, futurists should be empowered with the notion that they are really helping to design the future.


GSpot interview with Chris Arkenberg

Times article on The Institute for the Future

Your Future in 5 Easy Steps: Wired Guide to Personal Scenario Planning

Re-skinning the city – the dark side of augmented reality

(Above: Who Framed Roger Rabbit)

Years ago, I had an idea for a futuristic pair of goggles that visually transformed homeless people into lovable animated cartoon characters. Instead of being confronted by the conscience-pricking sight of an abandoned heroin addict shivering themselves to sleep in a shop doorway, the rich city-dweller wearing the goggles would see Daffy Duck snoozing dreamily in a hammock. London would be transformed into something out of Who Framed Roger Rabbit.

What’s more, the goggles could be adapted to suit whichever level of poverty you wanted to ignore: by simply twisting a dial, you could replace not just the homeless but anyone who receives benefits, or wears cheap clothes, or has a regional accent, or watches ITV, and so on, right up the scale until it had obliterated all but the most grandiose royals.

At the time this seemed like a sick, far-off fantasy. By 2013, it’ll be just another customisable application you can download to your iBlinkers for 49p, alongside one that turns your friends into supermodels and your enemies into dormice.

Futurismic: Re-skinning the city – the dark side of augmented reality

Augmented Reality in a Contact Lens


These visions (if I may) might seem far-fetched, but a contact lens with simple built-in electronics is already within reach; in fact, my students and I are already producing such devices in small numbers in my laboratory at the University of Washington, in Seattle [see sidebar, “A Twinkle in the Eye”]. These lenses don’t give us the vision of an eagle or the benefit of running subtitles on our surroundings yet. But we have built a lens with one LED, which we’ve powered wirelessly with RF. What we’ve done so far barely hints at what will soon be possible with this technology. […]

These lenses don’t need to be very complex to be useful. Even a lens with a single pixel could aid people with impaired hearing or be incorporated as an indicator into computer games. With more colors and resolution, the repertoire could be expanded to include displaying text, translating speech into captions in real time, or offering visual cues from a navigation system. With basic image processing and Internet access, a contact-lens display could unlock whole new worlds of visual information, unfettered by the constraints of a physical display.

IEEE Spectrum: Augmented Reality in a Contact Lens

Augmented reality – application examples video

This comes from MetaverseOne, creator of the augmented reality medical app mentioned here before.

If you’re interested in this sort of thing, be sure to check out The Headmap Manifesto (PDF) – it’s from the early 00s, but still relevant today. (I just host it here, I had absolutely nothing to do with Headmap).

© 2023 Technoccult
