Big Data Architectures - handling data that is too big, too fast, or too varied
- Sven Johann
- Track abstract
- Organizations increasingly must deal with petabyte-scale collections of data that come from click streams, transaction histories, sensors, and elsewhere. These data are not only big; they must also be processed quickly, for example to detect fraud at a point of sale or to determine which ad to show. They come from multiple sources in various formats and do not fit neatly into existing processing or analysis tools. This track explores the fundamentals, design considerations, and advanced case studies of systems based on the Hadoop ecosystem and related technologies, as well as general architectural patterns for solving Big Data problems.
| Day | Friday (7th Mar.) |