BIDMach: Machine Learning at the Limit with GPUs
Deep learning has made enormous leaps forward thanks to GPU hardware. But much Big Data analysis is still done with classical methods on sparse data. Tasks like click prediction, personalization,...
GPU-Accelerated R in the Cloud with Teraproc Cluster-as-a-Service
Analysis of statistical algorithms can generate workloads that run for hours, if not days, tying up a single computer. Many statisticians and data scientists write complex simulations and statistical...
MapD: Massive Throughput Database Queries with LLVM on GPUs
Note: this post was co-written by Alex Şuhan and Todd Mostak of MapD. At MapD our goal is to build the world’s fastest big data analytics and visualization platform that enables lag-free interactive...
What to Do with All That Bandwidth? GPUs for Graph and Predictive Analytics
Figure 1: Graph algorithms exhibit non-locality and data-dependent parallelism. Large graphs, such as this map of the internet, represent billion-edge challenges to existing hardware architectures. Did...
Accelerate Recommender Systems with GPUs
Wei Tan, a Research Staff Member at the IBM T. J. Watson Research Center, shares how IBM is using NVIDIA GPUs to accelerate recommender systems, which use ratings or user behavior to recommend new products,...
GOAI: Open GPU-Accelerated Data Analytics
Recently, Continuum Analytics, H2O.ai, and MapD announced the formation of the GPU Open Analytics Initiative (GOAI). GOAI—also joined by BlazingDB, Graphistry and the Gunrock project from the...
Seven Things You Might Not Know about Numba
One of my favorite things is talking to people about GPU computing and Python. The productivity and interactivity of Python combined with the high performance of GPUs is a killer combination...