I'm in the process of experimenting with Mozilla Ubiquity for Firefox. This is a really cool way to extend the experience of interacting with the WWW. I'd try to explain what it does, but honestly, I'm not that smart. In a nutshell, you can give your browser plain-English commands that access components of different online tools at the same time.
For instance, let's say I want to twitter about this blog post as I'm writing it... I simply select some text, hit Ctrl+Space, and type 'twitter this selection'. Wild.
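Under the hood, Ubiquity commands are small JavaScript snippets registered with the browser. Here's a rough, self-contained sketch of what a 'twitter this'-style command might look like; note that `CmdUtils` and `displayMessage` below are stubbed stand-ins for Ubiquity's real API, re-created locally so the example runs on its own, and the command itself is hypothetical:

```javascript
// Stand-ins for Ubiquity's CmdUtils API (assumption: the real extension
// provides these; they are stubbed here so the sketch is self-contained).
const CmdUtils = {
  commands: {},
  CreateCommand(spec) {
    this.commands[spec.name] = spec;
  },
};

function displayMessage(msg) {
  console.log(msg);
  return msg;
}

// A hypothetical "twitter this" command: it takes the selected text and
// pretends to tweet it. A real command would call Twitter's API here.
CmdUtils.CreateCommand({
  name: "twitter this",
  execute(input) {
    return displayMessage("Tweeting: " + input.selection);
  },
});

// Simulate the user selecting some text and invoking the command:
CmdUtils.commands["twitter this"].execute({
  selection: "Experimenting with Mozilla Ubiquity for Firefox",
});
// prints "Tweeting: Experimenting with Mozilla Ubiquity for Firefox"
```

The neat part is that this registration pattern is all it takes: once a command is defined, it's available from the same Ctrl+Space prompt as the built-ins.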
The real interest to me is this: what does the eventual, widespread adoption of this technology mean for the web interface? Assuming voice recognition or even thought recognition become viable input tools, software like this represents a very powerful way to speed up common use cases.
Something interesting I noticed is this: as a developer, I'm well aware of the unseen pipes and connections humming between applications across the WWW, most of which go unused between different services. This creates little walls around each application... I have to go to this site to get something and then go to that site to use it. No surprises. However, as a user, the idea of knocking these barriers down in such a usable way still feels completely novel.