Not even a decade ago, machine learning was a profession for an elite few. Nowadays, we don't have to be math or engineering wizards to implement state-of-the-art predictive models. Advances in computing hardware, and especially the use of GPUs for training deep neural networks, have made it feasible to develop predictive models that achieve human-level performance on various natural language processing and image recognition benchmarks. The many software layers and APIs that let us harness these hardware resources are becoming ever more convenient. In this talk, I will highlight the research and technology advances and trends of recent years in GPU-accelerated machine learning and deep learning, focusing on the hardware and software paradigms that have enabled this progress.
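
As a minimal sketch of how convenient these software layers have become, consider the following example, assuming PyTorch as one illustrative framework (the talk itself is not tied to any particular library): moving a model and its data onto a GPU amounts to a single device transfer, after which a training step runs on the accelerator without further changes.

```python
import torch
import torch.nn as nn

# Pick a GPU if one is available; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small feed-forward classifier; the architecture is arbitrary and
# serves only to illustrate the API.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
).to(device)  # one call relocates all parameters to the GPU

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real data (e.g., flattened 28x28 images).
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# One training step; every tensor operation below runs on the chosen device.
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```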