Wednesday, August 13, 2008

2. XML and Web 2.0

In the previous post, XML was introduced as a language that allows computer programs to determine the meaning of data. While humans can interpret data based upon some general context, such as its position on a page or how it is used in a sentence, a computer program interprets data by using the unambiguous structure of a markup language such as XML.

How significant is the use of XML in peer-to-peer communication between computer programs? It is revolutionizing the internet. When you hear terms such as AJAX, REST, Web 2.0, news feeds, RSS, and Atom, you are basically talking about using the internet to allow isolated computer programs to access remote data, interpret it, and use the data as fodder.
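Concretely, here is a minimal sketch in Python of what that interpretation looks like. The feed fragment is invented for illustration, but real RSS feeds follow the same structure: each tag tells the program exactly what a piece of data means.

    import xml.etree.ElementTree as ET

    # A fragment of an RSS feed -- the kind of XML a program might fetch
    # from a remote server. The tags label each piece of data, so the
    # program needs no human-style guessing about context.
    rss = """<rss version="2.0">
      <channel>
        <title>Example News</title>
        <item>
          <title>Widgets Inc. posts record earnings</title>
          <link>http://example.com/widgets</link>
          <pubDate>Wed, 13 Aug 2008 09:00:00 GMT</pubDate>
        </item>
      </channel>
    </rss>"""

    root = ET.fromstring(rss)
    for item in root.iter("item"):
        # Read each field by name, unambiguously.
        print(item.findtext("title"), "->", item.findtext("link"))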

A computer program can scan thousands of items from the internet, all formatted in a well-defined XML format, and mash the data up into a single meaningful report delivered to its owner. A program working for the Securities and Exchange Commission could read thousands of earnings reports, all formatted in XBRL (an XML vocabulary for financial reporting), and use artificial intelligence techniques to determine which companies are financially stressed. A busy executive could have his own tailor-made search engine that scans and flags internet news articles relevant to his marketing strategies. By harnessing the power of the computer program, users magnify the amount of information they can use from the internet.
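As a rough sketch of such a mashup, the following Python fragment fetches several feeds, keeps only the items matching its owner's interests, and prints one combined report. The feed URLs and keywords here are placeholders, not real endpoints.

    import urllib.request
    import xml.etree.ElementTree as ET

    # Hypothetical feed URLs and keywords -- substitute your own.
    FEEDS = [
        "http://example.com/markets.rss",
        "http://example.org/tech.rss",
    ]
    KEYWORDS = ("marketing", "acquisition")

    report = []
    for url in FEEDS:
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        for item in root.iter("item"):
            title = item.findtext("title", default="")
            # Flag only the items relevant to the owner's interests.
            if any(word in title.lower() for word in KEYWORDS):
                report.append((title, item.findtext("link")))

    # One consolidated report distilled from many remote sources.
    for title, link in report:
        print(title, "->", link)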

HTML-reading browsers made the first generation of the web accessible to the average human, but their power was limited by the volume of data that the human eye and brain can consume. Web 2.0, on the other hand, is making the web more accessible to the machine, allowing the machine to consume orders of magnitude more data.

Web 2.0 and XML are providing the ultimate consumer of data, the human, with the ability to utilize huge volumes of data that are first interpreted and summarized by computer programs.
