The military is decentralizing its networks. Here’s a piece I wrote for ReadWriteWeb about it:
DARPA contracted Raytheon in 2009 to build the “Mobile Ad-hoc Interoperable Network GATEway” (MAINGATE), a mobile network that both military and civilian organizations can use to communicate using any radio or wireless device. The agency announced last month that the system has now been tested for video, voice and data by both high-bandwidth and low-bandwidth users.
A key component of MAINGATE is the Network Centric Radio System (NCRS). According to Defense Industry Daily, NCRS provides: “1) a backbone radio architecture that enables IP versatile networks and 2) a radio gateway that enables legacy analog and digital communications systems to be linked together.” NCRS provides a self-healing ad-hoc mobile network that enables seamless communication between nearly any radios.
Defense Industry Daily reports that MAINGATE also features disruption-tolerant networking to cope with disruptions caused by line-of-sight issues, spectrum access, congested radio frequencies and noisy environments.
Of course the Internet was never truly free, bottom-up, decentralized, or chaotic. Yes, it may have been designed with many nodes and redundancies for it to withstand a nuclear attack, but it has always been absolutely controlled by central authorities. From its Domain Name Servers to its IP addresses, the Internet depends on highly centralized mechanisms to send our packets from one place to another.
The ease with which a Senator can make a phone call to have a website such as Wikileaks yanked from the net mirrors the ease with which an entire top-level domain, like say .ir, can be excised. And no, even if some smart people jot down the numeric ip addresses of the websites they want to see before the names are yanked, offending addresses can still be blocked by any number of cooperating government and corporate trunks, relays, and ISPs. That’s why ministers in China finally concluded (in cables released by Wikileaks, no less) that the Internet was “no threat.” […]
Back in 1984, long before the Internet even existed, many of us who wanted to network with our computers used something called FidoNet. It was a super simple way of having a network – albeit an asynchronous one.
One kid (I assume they were all kids like me, but I’m sure there were real adults doing this, too) would let his computer be used as a “server.” This just meant his parents let him have his own phone line for the modem. The rest of us would call in from our computers (one at a time, of course), upload the stuff we wanted to share, and download any email that had arrived for us. Once or twice a night, the server would call some other servers in the network and see if any email had arrived for anyone with an account on his machine. Super simple.
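The nightly routine described above is classic store-and-forward networking: mail queues up on one node until a scheduled exchange hands it to the node that holds the recipient’s account. A toy simulation of the idea (class and method names are illustrative, not any real FidoNet software):

```python
# Store-and-forward: each node queues outbound mail, then hands over
# anything addressed to the other node during the nightly call.

class Node:
    def __init__(self, name):
        self.name = name
        self.outbound = []   # (dest_node, dest_user, text) awaiting the nightly call
        self.mailboxes = {}  # user -> list of received messages

    def post(self, to_node, to_user, text):
        self.outbound.append((to_node, to_user, text))

    def nightly_call(self, other):
        """Deliver any queued mail addressed to accounts on the other node."""
        keep = []
        for node, user, text in self.outbound:
            if node == other.name:
                other.mailboxes.setdefault(user, []).append(text)
            else:
                keep.append((node, user, text))  # still waiting for its node
        self.outbound = keep

a, b = Node("A"), Node("B")
a.post("B", "klint", "hi from node A")
a.nightly_call(b)            # the once-a-night exchange
print(b.mailboxes["klint"])  # ['hi from node A']
```

Note that nothing here requires the nodes to be online at the same time as their users, which is exactly what made the scheme workable over dial-up phone lines.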
de Jong takes Richard Stallman’s critiques of cloud computing seriously. But, he says, “People want to use websites instead of desktop apps. Why do they want that? I don’t think it’s up to us developers to tell users what to want. We should try to understand what they want, and give it to them.”
de Jong acknowledges the many advantages to running applications in the cloud: you can access your applications and data from any computer without installing software or transferring files. You can access your files from multiple devices without syncing. And web applications have better cross-platform support.
So how can you give users web applications while keeping them in control of their data?
The basic idea is this: an Unhosted app lives on a web server and contains only source code. That source code is executed on a user’s computer and encrypts and stores data on another server. That data never passes through the app server. Therefore, the app provider doesn’t have a monopoly on your data. And since that data is encrypted, it can’t be exploited by the data host either (or at least, it probably can’t).
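The division of labor described above can be sketched in a few lines. This is a toy model, not the Unhosted protocol: the XOR keystream below is a stand-in for real encryption (Unhosted would use proper browser-side crypto), and all names are illustrative. The point is only that the storage host holds ciphertext it cannot read, while the app server holds no user data at all.

```python
# Client-side encryption sketch: data is encrypted on the user's machine
# before it ever reaches the storage host. Toy cipher, not real crypto.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream reverses itself

storage_host = {}  # the data server: only ever sees ciphertext
key = b"user-secret-key"
storage_host["note"] = encrypt(key, b"my private note")

print(storage_host["note"] != b"my private note")          # True: host can't read it
print(decrypt(key, storage_host["note"]))                  # only the client can
```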
The data can be hosted anywhere. “It could be in your house, it could be at your ISP or it could be at your university or workplace,” says de Jong.
“We had some hurdles to implement this, one being that the app cannot remember where your data lives, because the app only consists of source code,” he says. “Also your computer can’t remember it for you, because presumably you’re logging on to a computer you never used before.”
The Unhosted team solved the problem by putting the data location into usernames. Unhosted usernames look a lot like e-mail addresses, for example: willy@server.org. Willy is the username, and server.org is the location where the data is stored.
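The user@host convention means an app can recover the storage location from nothing but the name the user types in. A minimal sketch of that lookup (the function name is mine, not part of Unhosted):

```python
# Toy parser for Unhosted-style usernames: everything after "@" tells
# the app where the user's data lives, so neither the app server nor
# the local machine has to remember it.

def parse_unhosted_id(user_id: str):
    """Split 'willy@server.org' into (username, data_host)."""
    username, _, data_host = user_id.partition("@")
    if not username or not data_host:
        raise ValueError(f"not a valid user@host identifier: {user_id!r}")
    return username, data_host

name, host = parse_unhosted_id("willy@server.org")
print(name, host)  # willy server.org
```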
I just interviewed J Chris Anderson, the CFO of CouchOne, for ReadWriteWeb. CouchOne is the corporate sponsor of an open source database called CouchDB. Anderson recently started hosting a demo/proof-of-concept app called Twebz – a decentralized Twitter client – built with CouchDB and node.js. Anderson explains how CouchDB could be used to decentralize not only Twitter, but most other web applications as well. It’s pretty geeky but could have big ramifications: This tech could help build a more resilient Internet in the face of disasters, cyberwarfare and censorship.
The aim is to allow you to interact with Twitter when Twitter is up and you are online. But if Twitter is down for maintenance or you are in the middle of nowhere, you can still tweet. And when you can reach Twitter again, it will go through.
If lots of folks are using it, then they can see each other’s tweets come in even when Twitter is down.
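The offline-tolerant behavior Anderson describes boils down to writing locally first and syncing later. In Twebz the local store is CouchDB and the upstream push is a call to the Twitter API; the sketch below simulates both with plain Python, and all names are my own illustrations:

```python
# Write-local, sync-later: tweets land in a local store immediately,
# and a sync step pushes them upstream whenever the service is reachable.

class LocalStore:
    def __init__(self):
        self.pending = []   # tweets not yet delivered upstream
        self.timeline = []  # everything visible locally right away

    def tweet(self, text):
        self.pending.append(text)
        self.timeline.append(text)

    def sync(self, service_up, deliver):
        """Flush pending tweets if and only if the service is reachable."""
        if not service_up:
            return
        while self.pending:
            deliver(self.pending.pop(0))

delivered = []
store = LocalStore()
store.tweet("hello from the middle of nowhere")
store.sync(service_up=False, deliver=delivered.append)  # Twitter down: nothing lost
store.sync(service_up=True, deliver=delivered.append)   # back online: it goes through
print(delivered)  # ['hello from the middle of nowhere']
```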
Mostly the goal was to show how to integrate CouchDB with web services and APIs.
So if you did release this, and people started using it, and then one day Twitter decided “We’re done. We’re going to go raise pigs in the Ozarks,” Twebz would actually still be up and running fine basically forever and everyone could keep reading each other’s Tweets.
Yep. And as a side effect you have a complete personal Twitter archive of the folks you follow.
There’s even a feature to pull in the complete history of a user, so you can get the back fill of your closest friends if you want. […]
Could CouchDB and Node be used in conjunction to create some sort of decentralized darknet? Something along the lines of Freenet?
Node is a good fit for CouchDB because Couch encourages asynchronous background processes, but people also use Ruby / Python / Java for the same purposes. But yes, eventually the plan is that CouchDB will make web applications a lot more robust because they will no longer depend on a centralized point of failure. E.g., even if Twitter goes out of business, people can continue to share messages.
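The resilience Anderson points to comes from CouchDB’s multi-master replication: any two nodes can sync with each other, so messages keep spreading peer-to-peer even with no central service. A toy version of that merge, using dicts keyed by document id (this is an illustration of the idea, not the actual CouchDB replication protocol):

```python
# Peer replication sketch: each peer pulls whatever docs it's missing
# from the other, so both converge on the same set of messages.

def replicate(source: dict, target: dict):
    """One-way replication: copy docs the target doesn't have yet."""
    for doc_id, doc in source.items():
        target.setdefault(doc_id, doc)

alice = {"t1": "alice: the central service is down"}
bob = {"t2": "bob: still here!"}

replicate(alice, bob)  # bob pulls from alice
replicate(bob, alice)  # alice pulls from bob

print(sorted(alice) == sorted(bob))  # True: both peers hold both tweets
```

Run pairwise between many peers, this gossip-style exchange is what lets “people continue to share messages” after the original service disappears.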
The turnover of Web 2.0 startups is so fast that I think users get discouraged from signing up for services. Why bother with a new photo share if there’s a chance it won’t be around in a year? But when those are CouchApps, users can continue to use them even if no one is maintaining them, which makes it more rational to invest time in using them. Imagine if Pownce or Dodgeball were still being run by fans.
Over at ReadWriteWeb I take a look at the controversy surrounding the Lieberman-Collins bill:
It doesn’t sound like a “kill switch.” The bill would require the President to submit a report describing, among other things, “The actions necessary to preserve the reliable operation and mitigate the consequences of the potential disruption of covered critical infrastructure” (pg. 84 lines 1-4). That sounds like the opposite of a kill switch: this legislation describes a process by which the president is expected to take action to ensure access to “critical infrastructure,” including the Internet.
There’s plenty of room to debate the merits of the federal government dictating the security policies of private companies, the ability of the president to continually extend any provisions beyond 30 days, the value of establishing new cyber security departments within the government, and the vagueness of the language in the bill. But this is nothing nearly so radical as some are making it out to be.
In fact, as Senate Committee on Homeland Security and Governmental Affairs’ web site for the bill points out, the President already has a legislative (but of course, not technological) “kill switch.” The Communications Act of 1934 gave the president power to shut down “wire communications.”
Kevin at Grinding asks some questions about the social impediments to a post-scarcity future. He looks at the legislative restraints on P2P file sharing and wonders how that mess will play out when we’re able to copy things in meat-space:
A friend of mine who collects action figures shows me a custom mod of an Optimus Prime Transformer figure. I ask him how much it bugged him to dismantle a classic figure and he smiles and tells me he just scanned the parts he needed of his old one with a 3D scanner and built most of the new one with a 3D printer. And that’s just one example of how 3D printing is slipping into my everyday life. We’re rapidly approaching the point where duplicating Things for a fraction of the original resources is easy – and by “rapidly approaching” I mean people you know are rapid prototyping and cloning items as we speak. It’s not too much of a jump to think we’re not that far from something resembling nano-assembling – rendering ideas like “original” meaningless. We’re exceedingly close to the age where “remix culture” can remix Things with nearly the ease it can remix digital media.
But how will we react? Will we put DRM on food so it can’t be mass produced? Will we attempt to limit access to production engines? Will we allow “market forces” to keep the poor needy while the top 1% don’t even have a concept of need? Will we rush out to buy iMakers that scan the net to ensure anything you’re producing isn’t a component of a copyrighted product or recipe – or that only produce “family safe” products?
One comment at Grinding points to the fact that file sharing continues online unabated. However, ACTA could be a significant blow not only to file trading but to online freedom in general. Meanwhile, in meatspace, grocery stores are dumping bleach on food to thwart dumpster divers. There’s only so much good routing around problems can do before you must confront the fundamental problems.
In a Science article published in early 2009, prominent developmental psychologist Patricia Greenfield reviewed more than 40 studies of the effects of various types of media on intelligence and learning ability. She concluded that “every medium develops some cognitive skills at the expense of others.” Our growing use of the Net and other screen-based technologies, she wrote, has led to the “widespread and sophisticated development of visual-spatial skills.” But those gains go hand in hand with a weakening of our capacity for the kind of “deep processing” that underpins “mindful knowledge acquisition, inductive analysis, critical thinking, imagination, and reflection.”
We know that the human brain is highly plastic; neurons and synapses change as circumstances change. When we adapt to a new cultural phenomenon, including the use of a new medium, we end up with a different brain, says Michael Merzenich, a pioneer of the field of neuroplasticity. That means our online habits continue to reverberate in the workings of our brain cells even when we’re not at a computer. We’re exercising the neural circuits devoted to skimming and multitasking while ignoring those used for reading and thinking deeply.
As I said during my interview with Ashley Crawford (Pay attention here! Don’t click that link yet!), I find that reading on more limited mobile devices like my BlackBerry and my iPod Touch is helping me concentrate on reading longer, more substantive material. Reading on my computer, with its tabbed browser, has a tendency to destroy my attention span.
I’m trying to discipline myself to browse first, read later – find stuff of interest by scanning through feeds, Twitter etc, and then go over the stuff I’ve flagged to read before I go back and find more stuff.
Do you have any strategies for navigating the web without destroying your attention span, or do you think that the transformation of our brains could actually be a good thing?
The biggest threat to the open internet is not Chinese government hackers or greedy anti-net-neutrality ISPs, it’s Michael McConnell, the former director of national intelligence.
McConnell’s not dangerous because he knows anything about SQL injection hacks, but because he knows about social engineering. He’s the nice-seeming guy who’s willing and able to use fear-mongering to manipulate the federal bureaucracy for his own ends, while coming off like a straight shooter to those who are not in the know. […]
He’s talking about changing the internet to make everything anyone does on the net traceable and geo-located so the National Security Agency can pinpoint users and their computers for retaliation if the U.S. government doesn’t like what’s written in an e-mail, what search terms were used, what movies were downloaded. Or the tech could be useful if a computer got hijacked without your knowledge and used as part of a botnet.
Me: You’ve said your advice for entrepreneurs is to avoid venture capital. Can you explain that a bit?
Matt: I have so many friends in the technology industry who are so obsessed with getting funded. And they’re confusing that with getting paid, with it being their money. People see it as free money, and it’s not. A lot of people obsessed with venture capital see Metafilter as a lifestyle business, but in my mind, it’s a mature business. It works really well, and yet nobody aspires to do something like this and I don’t know why. Nobody celebrates just simple businesses that work.
Don’t take any money, don’t owe anything to anyone, build [your business] how you want instead of constantly being on that treadmill of growth growth growth.