Beyond Big Data – a small and fast data example

Big Data is all the rage. Everywhere you go, every meeting or presentation you sit through, Big Data seems to be there.

But there are opportunities beyond big data – for example, how we handle small data fast.

Here’s an example of small data that I experience almost every day.

In many corporate buildings in India, you will notice that you punch your desired floor into a panel, which then tells you which lift-car to hop on to.

A simple yet brilliant solution.

The system clubs the waiting passengers into specific cars by their desired floors. The average wait time is lower, and the average travel time to your floor is significantly lower.

And all the magic happens in a jiffy.
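The grouping idea can be sketched in a few lines. This is only a toy illustration of the principle (real destination-dispatch controllers also weigh car position, load and direction, none of which appear in the post); the function name and greedy round-robin policy are my assumptions:

```python
from collections import defaultdict

def assign_cars(requests, num_cars):
    """Toy destination dispatch: group waiting passengers by desired
    floor, then spread whole floor-groups across cars so that each
    car stops at as few distinct floors as possible.

    requests: list of (passenger_id, floor) tuples
    returns: dict mapping car index -> list of passenger_ids
    """
    # Club passengers together by their destination floor.
    by_floor = defaultdict(list)
    for pid, floor in requests:
        by_floor[floor].append(pid)

    # Hand out whole floor-groups to cars round-robin,
    # largest groups first, so cars make fewer stops.
    assignments = defaultdict(list)
    groups = sorted(by_floor.items(), key=lambda kv: -len(kv[1]))
    for i, (_floor, pids) in enumerate(groups):
        assignments[i % num_cars].extend(pids)
    return dict(assignments)
```

With two people headed to floor 5 and one to floor 7 across two cars, the floor-5 pair share one car and the floor-7 passenger gets the other – each car makes a single stop, which is the whole point of the panel.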

The data becomes irrelevant soon (apart from being used by the algorithm for learning and further optimization) – a classic example of not-so-big data. But the fact that this small set of inputs from users is taken, crunched and optimized for lift-car allocation in almost real time makes it so amazing.

Small but fast Data.

In Data-led-solutions, the following typically have significant impact:

  • Data-accuracy: How accurate is the data that we feed into the system?
  • Data-freshness: How fresh is the data as it moves across the value chain?
  • Data-velocity: What is the speed with which we process and move data?

This small-data use-case scores very high on the data-velocity parameter, and I suspect that just solving for data-velocity is what has allowed the solution to be adopted.

So in summary, there is life beyond Big Data too. 🙂

What do you think?

Big Data, medicine and Dr Gregory House

I am a big fan of House. BIG FAN!

I guess what I really like is the infectious curiosity of Dr Gregory House and the very extreme personalities of the characters at his clinic.

In my overly simplistic understanding of medicine, there are two parts to it – diagnosis and treatment.

And our Dr House excels at the former. He is able to connect seemingly unrelated symptoms with behavioral patterns, genetic history and what not. And it is fascinating how the power of inference and hypothesis testing leads his team to find the true nature of what really afflicts the patients.


What I wonder is this: it might have been a super-impressive set of scripts way back in 2004–2012 (yes, the series is that old), but would it be as impressive in today’s age and time?

Experts say that Big Data or analytics is effective if the 3Vs are encouraging – volume, velocity and variety. In case of healthcare they talk about a 4th V – veracity or data accuracy.

An increasing number of our records are getting digitized – so the volume surely is exploding.

Wearables that track real-time pulse, blood pressure, etc. are already an expected norm. Patient-specific information is definitely flowing at a very high velocity within healthcare ecosystems.

Variety of data might get addressed when data-sets from individual data networks (EMR companies, hospitals, health insurers, etc.) are purged of confidential and individual data, and the insights shared on a common platform/network.

Given all these indicators, do you see a time when Big Data scientists will put a Dr Gregory House in every hospital that can afford such systems? The trick might be to find the best set of tests to conduct to confirm the existence of disease(s), and this is something that machine-taught algorithms can do nicely.
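As a toy illustration of what "finding the best set of tests" could mean algorithmically, here is a sketch of one classic approach: pick the test whose result most reduces uncertainty about the diagnosis (information gain, the criterion decision trees use). The case data, test names and diagnoses below are entirely made up for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of diagnosis labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def best_test(cases, tests):
    """Pick the test whose outcome best separates the candidate
    diagnoses, i.e. the one with the highest information gain.

    cases: list of (diagnosis, {test_name: result}) records
    tests: names of tests not yet performed
    """
    labels = [dx for dx, _ in cases]
    base = entropy(labels)

    def gain(test):
        # Partition cases by the test's outcome and compute the
        # weighted entropy that would remain after running it.
        outcomes = {}
        for dx, results in cases:
            outcomes.setdefault(results[test], []).append(dx)
        remaining = sum(len(g) / len(cases) * entropy(g)
                        for g in outcomes.values())
        return base - remaining

    return max(tests, key=gain)

# Hypothetical case history: which single test best tells the
# two candidate diagnoses apart?
cases = [
    ("lupus",     {"ANA": "pos", "MRI": "clear"}),
    ("lupus",     {"ANA": "pos", "MRI": "lesion"}),
    ("infection", {"ANA": "neg", "MRI": "clear"}),
    ("infection", {"ANA": "neg", "MRI": "lesion"}),
]
```

On this made-up history, `best_test(cases, ["ANA", "MRI"])` picks `"ANA"`, since the MRI result is the same across both diagnoses and tells us nothing – a crude, data-driven version of the differential-diagnosis whiteboard.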

Would machine learning really help connect dots in the medical outliers?