In my briefing to the EU guys about the "data challenge", I talked about IBM's view on "big data". Recently Arvind Krishna, the IBM General Manager of the Information Management division, spoke at the Almaden centennial colloquium about the 4Vs of big data. The first three Vs have been discussed before:
The regular slides talk about volume (for data at rest) and velocity (for data in motion), but I think we sometimes also need velocity to process data at rest (e.g., Watson), and we sometimes also need to process high volumes of moving data; variety stands for poly-structured data (structured, semi-structured, and unstructured).
Veracity deals with uncertain or imprecise data. In the past there was an assumption that this is not an issue, since it would be possible to cleanse the data before using it; however, this is not always the case. In some cases, due to the velocity of moving data, it is not possible to get rid of the uncertainty, and there is a need to process the data with its uncertainty. This is certainly true when talking about events; uncertainty in event processing is a major issue that still needs to be conquered. Indeed, among the four Vs, veracity is the one least investigated so far. This is one of the areas we investigate, and I'll write more about it in later posts.
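To make the idea concrete, here is a minimal sketch (entirely my own illustration, with hypothetical names; this is not an IBM implementation or any particular event processing product's API) of one simple way to process events without first cleansing away their uncertainty: each event carries a confidence score, and instead of discarding uncertain events, the uncertainty is propagated into the result.

```python
# Illustrative sketch only: events carry a confidence score (probability
# that the event actually occurred), and aggregation weighs each event's
# value by that probability rather than requiring clean, certain input.

from dataclasses import dataclass

@dataclass
class Event:
    name: str
    value: float
    confidence: float  # probability in [0, 1] that the event is real

def expected_total(events, min_confidence=0.0):
    """Confidence-weighted sum: each value counts in proportion to the
    probability that its event occurred; events below the threshold
    are dropped entirely."""
    kept = [e for e in events if e.confidence >= min_confidence]
    return sum(e.value * e.confidence for e in kept)

events = [
    Event("sensor-reading", 10.0, 0.9),  # fairly certain
    Event("sensor-reading", 10.0, 0.5),  # uncertain, but still informative
    Event("sensor-reading", 10.0, 0.1),  # very uncertain
]

print(expected_total(events))                      # expected value: 15.0
print(expected_total(events, min_confidence=0.4))  # threshold drops last event: 14.0
```

The point of the sketch is the contrast with the cleansing assumption: rather than forcing each event to be either trusted or deleted before processing, the uncertainty travels with the data and shows up as an expected value in the output.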