It's all the rage in blogging circles, but I've yet to grasp the practical implications of Web 2.0.
Semantic coding of information to provide contextual meaning makes sense, and ought to be happening already.
I already use RSS feeds to get news headlines on my Yahoo page from sources that interest me, and I intend (once I work out how to do it) to provide an RSS feed for this blog, so that anyone who chooses can be alerted to new posts rather than having to visit just in case I've written something.
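From what I've read so far, "working out how to do it" may be simpler than I feared: an RSS 2.0 feed is just a small XML file listing the channel and its posts. Here's a minimal sketch of what mine might look like (the titles, URLs and date below are placeholders, not my actual feed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- Required channel elements: title, link, description -->
    <title>Example Blog</title>
    <link>http://www.example.com/blog/</link>
    <description>Occasional thoughts on the Web</description>
    <!-- Each post becomes an <item>, newest first by convention -->
    <item>
      <title>What is Web 2.0 for?</title>
      <link>http://www.example.com/blog/web-2-0.html</link>
      <description>Trying to work out the practical implications of Web 2.0.</description>
      <pubDate>Sat, 29 Oct 2005 12:00:00 GMT</pubDate>
      <guid>http://www.example.com/blog/web-2-0.html</guid>
    </item>
  </channel>
</rss>
```

As far as I can tell, uploading a file like that and pointing to it from each page's header with a `<link rel="alternate" type="application/rss+xml" href="...">` tag is all that's needed for feed readers to discover it.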
Amazon and other sites that pull relevant content from databases into an interface are great. I also see the benefits of Flickr, Wikipedia and eBay, where users provide the content, the tagging and the votes that make up reputations. Google AdWords makes sense by providing targeted, relevant advertising, and I can see how this could translate into local advertising on Google Maps.
Better still would be an end to the eternal cycle of software upgrades, if and when applications become web-based rather than PC-based.
So far, so good. What I don't understand, however, is how this will affect 'traditional' websites. There is a huge amount of information on the Web that is relatively static, and there's nothing wrong with that. Archived information is an important part of the Web, and websites that break links to such information by moving it are a blight on the Internet (even classified as one of Jakob Nielsen's great web design mistakes).
I haven't found anyone yet who explains or describes what may happen to the multitude of small, informative sites. Am I being thick? Or does nobody know yet? There's a lot of discussion about Flock, a new Mozilla-based browser with built-in integration with del.icio.us and Flickr; other services are promised soon. Some discuss the possibilities it offers, while others are not convinced.
Flock is still in development, but I've installed it and, with slight trepidation, will investigate and report back.