
Overcoming Zettabytes: Insight into a Sea of Data

Blog post   •   Feb 26, 2013 11:31 CET

By Steve Cardell: Today, 5 billion Google searches take place, 1 billion items of content are shared on Facebook, 294 billion e-mails are sent and 200 million tweets are tweeted. There are approximately 190 million business organizations, each processing an average of 63 terabytes of information a year. Taking all of this together, the latest estimates suggest that the planet’s approximately 30 million servers are likely to process around 1.8 zettabytes of data (a zettabyte is ten to the 21st power bytes) this year. And that volume is expected to double every eighteen months.

So the abundance of data, its ease of access and the dramatic increase in computing power over the past few years mean that organizations will surely have every piece of information needed to make incisive business decisions and deploy strategy effectively right at their fingertips. Right? Apparently not.
It appears that seeing the wood for the trees simply gets harder as the amount of available information grows. Let’s consider the basic realities. Lodestar Research found that only 17% of companies have a single view of the customer. Apparently, fewer than half of companies are confident that they know precisely how many employees they have at any point in time. A Nucleus Research study found that 55% of company data was inaccurate and that information used for decision making was, on average, 14 months old.

It is perhaps not a surprise that an Avanade survey found that 56% of executives feel overwhelmed by the amount of data their company manages, and that important decisions are delayed as a result of too much information. Not surprisingly, 55% also reported that their IT infrastructure was under strain due to the growth of data, and 47% confirmed that security is now a real challenge because of the volume and multiple sources of data entering their organizations. And yet organizations are throwing money at this problem… big money… in the order of $70 billion a year according to Deloitte. But we don’t seem to be getting there.

An interesting, although perhaps not surprising, finding of the Avanade survey was that small enterprises (fewer than 1,000 employees) rated their ability to pick out relevant data more than 10% higher than larger enterprises did. Almost half of the large-enterprise executives admit they have made bad decisions due to poor data. So it seems that in larger enterprises data crosses a threshold at which it becomes unmanageable, and small businesses, even with limited resources, often benefit from more timely, incisive information. More powerful executives in larger corporates are disabled by late, irrelevant and inaccurate information, and yet the consequences of their decisions are much more profound... financially profound. According to Ovum, the inefficiency and customer problems caused by bad data cost the US economy $700 billion per year, or about 5% of GDP. Assuming this is equally a global challenge, and applying that same 5% to roughly $70 trillion of global output, the world has a problem in the order of $3 trillion… and it’s growing.

How do we respond to this challenge? Well, perhaps Albert Einstein offers a good reminder for thinking through our options, with his definition of lunacy as ‘continuing to do the same thing but expecting a different result’. Here are three thoughts:
 

  1. Lower your expectations: Choice can be a dangerous thing, especially for company executives. If you’ve ever eaten at an all-you-can-eat buffet, you’ll know what I mean. When it’s all available and it’s all free, you suddenly decide that things you didn’t know you wanted, you now need. And so it is with data. With so much of it around, it’s no wonder that executives feel overwhelmed. And yet, despite this, two-thirds still want more, and want it faster. We can learn something from e-mail in this regard. It is estimated that 60-70% of all e-mails sent in a day are unwanted or unhelpful. They hinder productivity rather than aid it. And so we have seen the rise of e-mail protocols and invitations for readers to feed back on whether a particular e-mail was helpful or not. Data has become like a drug: no matter how much you have, you want more. The easiest response from an executive faced with a decision is “go get me more data”. And so an entire industry has grown up in which, despite all of our systems, Excel still seems to be the only true enterprise reporting tool. We have to begin by recognizing that data and information are a company resource. Like any other resource, they cost money, time and focus to acquire. It is estimated, by the way, that less than 10% of the data created by companies’ reporting teams is ever read or used. Challenge what is actually needed to make business decisions and begin by removing swathes of requirements.
  2. Be interventionist, not systemic: Most companies attempt to fix the data problem in a systemic manner. Numerous data management projects are in flight across the globe right now. Data dictionaries are being written; controls and authority matrices are being drawn up to define what data can and cannot be entered into systems; enterprise system data models are being re-architected. While these efforts have their place, sometimes it is better to recognize when a problem is insoluble. Given the sheer amount of data within an organization, and recognizing that on average 55% of it is wrong today, we are not likely to create a world where all data input is accurate. So rather than trying to solve the issue systemically (and prolonging the period of inaccurate reporting), we are better off investing our time in data extraction processes. We need to decide which information we will collect and which system we will collect it from. Don’t try to make the whole data set correct; be interventionist about what data you pull from that set and where you use it (a minimal sketch of this extraction-first approach follows this list). This delivers very quick wins, and tends to address 80% of the data problem from the outset. If you are running a systemic data project, and the project leader tells you that in two years and after $20M there’s a good chance you’ll get your data needs met… change track.
  3. Don’t drink from the ocean: If you’ve ever seen a desalination plant in action, you will know that they are pretty impressive chemical systems. In the Middle East, people have died of thirst while surrounded by the ocean. This reminds me of our data challenge: data, data all around, but nothing useful for business decision making. And so, the solution. Desalination plants take a tiny percentage of the ocean’s water and produce drinkable water… useful water. When Bob Kaplan and David Norton developed the Balanced Scorecard concept back in 1992, they tried to persuade senior executives that they needed only 20-25 key performance metrics to run their organizations. Apparently, the average executive is actually given 2,000-5,000. While Kaplan and Norton may have been overly optimistic, somewhere under 50 measures is probably right. So build your desalination plant (a separate business intelligence database) for those 50. Be ruthless about data accuracy, controlled through extraction rules, for whatever feeds those 50. Be real time. Be predictive with them. Employ operational research PhDs to correlate them. Stop looking for more metrics; define the ones that help you guide the strategic direction of the organization, learn how to interpret them, and become predictive about cause and effect, using data to set the course ahead, not just to report on the journey travelled.
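For readers who want to picture points 2 and 3 in system terms, a minimal sketch in Python follows. It is illustrative only: the metric names, source systems and validation thresholds are invented assumptions, not taken from any real scorecard, client system or product.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Metric:
    name: str                           # one of the sub-50 scorecard measures
    source_system: str                  # the single agreed system to extract it from
    validate: Callable[[float], bool]   # extraction rule applied on the way out

# A deliberately short scorecard: be interventionist about what you pull,
# rather than trying to fix every record in every source system.
SCORECARD = [
    Metric("order_fill_rate_pct", "ERP", lambda v: 0.0 <= v <= 100.0),
    Metric("customer_churn_pct", "CRM", lambda v: 0.0 <= v <= 100.0),
    Metric("days_sales_outstanding", "ERP", lambda v: 0.0 <= v <= 365.0),
]

def extract(raw_feed: Dict[str, Dict[str, float]]) -> Dict[str, float]:
    """Pull only the scorecard metrics, each from its agreed source system,
    and drop anything that fails its extraction rule."""
    accepted = {}
    for m in SCORECARD:
        value = raw_feed.get(m.source_system, {}).get(m.name)
        if value is not None and m.validate(value):
            accepted[m.name] = value
        # else: flag for follow-up rather than blocking the whole report
    return accepted

# Stand-in for one night's extract from the ERP and CRM systems.
nightly = {
    "ERP": {"order_fill_rate_pct": 96.5, "days_sales_outstanding": 47.0},
    "CRM": {"customer_churn_pct": 142.0},   # fails its rule, so it is excluded
}
print(extract(nightly))   # {'order_fill_rate_pct': 96.5, 'days_sales_outstanding': 47.0}

The point of the sketch is the shape, not the code: a short, named list of measures, one agreed source per measure, and accuracy enforced at the point of extraction rather than across the whole data estate.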

I am reminded of Bob, the owner of a small manufacturing business that made white-label tubes for consumer products (toothpaste tubes, shampoo bottles and the like). Bob expected to know productivity by machine and operator; he wanted to know waste; he wanted to know the bought-in cost of power and raw plastic; he wanted to know the percentage of production on new products less than 90 days old; and he wanted to know the order fill rate on his customer orders. He wanted to know staff sickness, overtime and the average pay rate. He wanted to know logistics cost per mile. And he wanted that information on his desk at the end of every working day. OK, so Bob only employed 50 people and worked from one manufacturing site. But it seems to me that, for those leading much larger and more global enterprises, the goal is still the same. We need to build our data strategy to allow us to get back, simply and easily, to Bob’s world. After all, there is $3 trillion at stake… which, by the way, is bigger than the GDP of France.
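To show how few numbers Bob’s end-of-day sheet really involves, here is a short hypothetical calculation in the same vein; the field names and figures are invented for illustration, not Bob’s actual data.

# Hypothetical stand-ins for one day's extracts from the order and shop-floor systems.
orders = {"received": 120, "shipped_complete": 113}
production = {"units_good": 48_500, "units_scrap": 1_200, "machine_hours": 210}
people = {"headcount": 50, "absent": 2, "overtime_hours": 14}

fill_rate = 100 * orders["shipped_complete"] / orders["received"]
waste_pct = 100 * production["units_scrap"] / (production["units_good"] + production["units_scrap"])
units_per_hour = production["units_good"] / production["machine_hours"]
absence_pct = 100 * people["absent"] / people["headcount"]

print(f"Order fill rate : {fill_rate:5.1f} %")
print(f"Waste           : {waste_pct:5.1f} %")
print(f"Productivity    : {units_per_hour:5.0f} units per machine-hour")
print(f"Absence         : {absence_pct:5.1f} %   Overtime: {people['overtime_hours']} h")

A handful of well-chosen, accurate figures on the desk every evening is the standard larger enterprises should be aiming to get back to.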

About Steve Cardell: Steve joined HCL Technologies through its acquisition of Axon Group PLC, the FTSE-250 London-listed enterprise technology company, where Steve was the Chief Executive Officer. At HCL Steve is responsible for the Enterprise Applications Services (EAS) business, which encompasses the Oracle, SAP and Microsoft practices, as well as Public Services, which encompasses the Public Sector, Utilities, Energy, Travel & Transportation and Aerospace & Defence verticals. Combined, Steve looks after $1.5 billion of HCL’s operations. Steve holds business qualifications from Bath and London Business Schools.