- Variety – Data that comes from many sources and in many structures and formats
- Velocity – Data arriving rapidly, defined by its flow rate and/or rate of accumulation
- Volume – Terabytes, petabytes, and even zettabytes of data, increasing at roughly 40% annually
This description gives the concept some clarity. However, a much more pragmatic definition can be found on Wikipedia (the irony, of course, being that Wikipedia is itself the ‘Big Data version’ of the Encyclopedia Britannica, a disruption driven by the very phenomenon being defined):
In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools.
By defining Big Data by the challenge of using on-hand tools, we acknowledge that the term means something different to everyone. If you’re Nielsen and have been managing massive quantities of marketing intelligence for decades, Big Data isn’t a term you even use. If you’re a retailer trying to compete with the likes of Macy’s and Nordstrom, Big Data is a significant risk to your business. This is a critical point: Big Data means something different to everyone, and it should.
Tip of the iceberg
And because it means something different to everyone, the applications needed to solve Big Data problems vary by situation. In fact, the biggest ‘tool’ for solving Big Data challenges is the infrastructure that collects, sorts, and serves up data. The best way to put it: most Big Data applications are just the ‘tip of the iceberg’; the infrastructure beneath them does most of the work.
And beyond bringing data to the table, that same infrastructure is key to following through on the insights that organizations gain from analyzing data. Without it, event processing, workflow, customer interaction, and everything else that makes business ‘work’ simply aren’t possible.
This may be disappointing news, especially for those whose livelihood depends on the hype. The reality is that Big Data solutions come back to the same technology fundamentals that have always mattered.