Scalable Scientific Computing Algorithms Using MapReduce
(University of Waterloo, 2013-09-03)
Cloud computing systems, like MapReduce and Pregel, provide a scalable and fault-tolerant environment for running computations at massive scale. However, these systems are designed primarily for data-intensive computational ...
PStorM: Profile Storage and Matching for Feedback-Based Tuning of MapReduce Jobs
(University of Waterloo, 2013-01-02)
The MapReduce programming model has become widely adopted for large-scale analytics on big data. MapReduce systems such as Hadoop have many tuning parameters, many of which have a significant impact on performance. The map ...
Enhancing Data Processing on Clouds with Hadoop/HBase
(University of Waterloo, 2011-10-12)
In the current information age, large amounts of data are being generated and accumulated rapidly in various industrial and scientific domains. This imposes important demands on data processing capabilities that can extract ...
Implementations of iterative algorithms in Hadoop and Spark
(University of Waterloo, 2014-07-29)
To face the challenges of the large amounts of data generated by companies such as Facebook, Amazon, and Twitter, cloud computing frameworks such as Hadoop are used to store and process Big Data. Hadoop, an open ...