Big data gets big through a constant stream of incoming data. In high-volume environments, that data arrives at tremendous rates, yet it still needs to be analyzed and stored.

John Hugg, software architect at VoltDB, argues that rather than simply storing that data to analyze later, we may have reached the point where it can be analyzed as it is ingested, all while maintaining extremely high intake rates using tools such as Apache Kafka.
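To make the ingest-time idea concrete, here is a minimal sketch using Kafka's standard Java consumer: instead of landing events in storage for a later batch job, it updates a running per-key count as each record arrives. The broker address, the "events" topic, and the counter itself are assumptions for illustration; the counter simply stands in for whatever analysis you would run on ingest.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IngestTimeCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "ingest-analytics");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Running aggregate, updated on ingest rather than in a later batch job.
        Map<String, Long> countsByKey = new HashMap<>();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // Analyze each event as it arrives; assumes keyed events.
                    countsByKey.merge(record.key(), 1L, Long::sum);
                }
            }
        }
    }
}
```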

– Paul Venezia
