Wednesday, March 3, 2010

The Data Deluge

Interesting article from the Economist here.

Those into marketing analytics, business intelligence, or secondary data analysis in general should definitely read it, IMHO.

Everywhere you look, the quantity of information in the world is soaring. According to one estimate, mankind created 150 exabytes (billion gigabytes) of data in 2005. This year, it will create 1,200 exabytes. Merely keeping up with this flood, and storing the bits that might be useful, is difficult enough. Analysing it, to spot patterns and extract useful information, is harder still. Even so, the data deluge is already starting to transform business, government, science and everyday life (see our special report in this issue). It has great potential for good—as long as consumers, companies and governments make the right choices about when to restrict the flow of data, and when to encourage it.

Intrigued? Read on....

Mobile-phone operators, meanwhile, analyse subscribers’ calling patterns to determine, for example, whether most of their frequent contacts are on a rival network. If that rival network is offering an attractive promotion that might cause the subscriber to defect, he or she can then be offered an incentive to stay. Older industries crunch data with just as much enthusiasm as new ones these days. Retailers, offline as well as online, are masters of data mining (or “business intelligence”, as it is now known). By analysing “basket data”, supermarkets can tailor promotions to particular customers’ preferences. The oil industry uses supercomputers to trawl seismic data before drilling wells. And astronomers are just as likely to point a software query-tool at a digital sky survey as to point a telescope at the stars.
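The calling-pattern example above is easy to picture in code. Here is a minimal sketch of the idea: from call records, flag subscribers most of whose calls go to a rival network, since they are the likely churn candidates. The record format, names and the 60% threshold are all my own illustrative assumptions, not anything from the article.

```python
from collections import Counter

# Hypothetical call-detail records: (subscriber, contact, contact_network).
# Names, networks and the threshold below are made up for illustration.
call_records = [
    ("alice", "bob",   "rival"),
    ("alice", "bob",   "rival"),
    ("alice", "carol", "rival"),
    ("alice", "dave",  "home"),
    ("eve",   "frank", "home"),
    ("eve",   "frank", "home"),
    ("eve",   "grace", "home"),
]

def churn_risk(records, threshold=0.6):
    """Return subscribers whose share of calls to a rival network exceeds the threshold."""
    calls_to_rival = Counter()
    calls_total = Counter()
    for subscriber, _contact, network in records:
        calls_total[subscriber] += 1
        if network == "rival":
            calls_to_rival[subscriber] += 1
    return {
        s: calls_to_rival[s] / calls_total[s]
        for s in calls_total
        if calls_to_rival[s] / calls_total[s] >= threshold
    }

print(churn_risk(call_records))
# {'alice': 0.75} -> a candidate for a retention offer
```

In practice the operator would of course work off billions of call-detail records and a richer definition of "frequent contact", but the logic starts out this simple.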

With time (say in another decade), India too shall become data-friendly. Right now, the collection, collation and compilation of data into analyzable volumes is negligible from a business observer's POV. That should change a lot as prices plunge for the basic data-collection infrastructure (scanner machines, websites, mobile call directories and the like) feeding into databases.

Then the fun will start. Once the data are in, the modeling can take off.
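As a toy illustration of the kind of modeling that becomes possible once scanner data sit in a database — along the lines of the "basket data" mining the article mentions — here is a minimal co-occurrence sketch. The baskets and item names are made up; a real retailer would use proper association-rule mining at far larger scale.

```python
from itertools import combinations
from collections import Counter

# Toy basket data: one list of purchased items per checkout (illustrative only).
baskets = [
    ["bread", "milk", "eggs"],
    ["bread", "milk"],
    ["milk", "eggs"],
    ["bread", "butter"],
    ["bread", "milk", "butter"],
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(set(basket)), 2):
        pair_counts[pair] += 1

# The most frequent pairs are candidates for joint promotions.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
# ('bread', 'milk') 3
# ('eggs', 'milk') 2
# ('bread', 'butter') 2
```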

Read the article in full. Worthwhile, IMHO.
