Training and inference of deep learning models involve many steps. The faster each experiment iteration is, the more we can optimize overall model performance given limited time and resources. I collected and organized several PyTorch tricks and tips to maximize memory efficiency and minimize run time. To make the most of these tips, we also need to understand how and why they work.
Link
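To make the flavor of these tips concrete, here is a minimal sketch of one widely used trick of this kind (an illustration of my own, not necessarily one from the linked post): disabling autograd bookkeeping during evaluation, which saves both memory and time.

```python
import torch
from torch import nn

model = nn.Linear(128, 10)
x = torch.randn(32, 128)

# inference_mode() skips gradient tracking entirely, so no intermediate
# activations are stored: less memory and faster forward passes.
with torch.inference_mode():
    preds = model(x)
```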
Learn how to use PyTorch, MONAI, and Python for computer vision with machine learning. One practical use case for artificial intelligence is healthcare imaging. In this course, you will improve your machine learning skills by building an algorithm for automatic liver segmentation.
Link
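As a rough sketch of what such a pipeline looks like (my own minimal example with made-up shapes, not the course's code), MONAI pairs a 3D U-Net with a Dice loss for volumetric segmentation:

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# 3D U-Net for a two-class (liver vs. background) segmentation task.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

volume = torch.randn(1, 1, 64, 64, 64)          # toy CT patch
mask = torch.randint(0, 2, (1, 1, 64, 64, 64))  # toy ground-truth mask
loss = loss_fn(model(volume), mask)
```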
Today we’ll dive into the theory and implementation of the Graph Attention Network (GAT). In a nutshell: attention rocks, graphs rock, GAT’s authors rock!
Link
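The article builds GAT from scratch; for a quick taste of the layer in action, here is a sketch using PyTorch Geometric's off-the-shelf GATConv (my choice of library, not the article's own implementation):

```python
import torch
from torch_geometric.nn import GATConv

# Toy graph: 4 nodes with 16 features each; edges given as a [2, num_edges] index.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])

# 4 attention heads, each producing 8 features; head outputs are concatenated.
conv = GATConv(in_channels=16, out_channels=8, heads=4)
out = conv(x, edge_index)  # shape: [4, 32]
```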
TorchMetrics is a really nice and convenient library that lets us compute the performance of models in an iterative fashion. It’s designed with PyTorch (and PyTorch Lightning) in mind, but it is a general-purpose library compatible with other libraries and workflows.
Link
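The iterative pattern the blurb refers to looks roughly like this (a minimal sketch with toy data): each batch updates the metric's internal state, and a single compute() call aggregates over everything seen so far.

```python
import torch
import torchmetrics

metric = torchmetrics.Accuracy(task="multiclass", num_classes=3)

for _ in range(4):  # stand-in for a validation loop
    preds = torch.randn(8, 3).softmax(dim=-1)
    target = torch.randint(0, 3, (8,))
    metric.update(preds, target)  # accumulate per-batch statistics

accuracy = metric.compute()  # aggregate over all batches seen
metric.reset()               # clear state before the next epoch
```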
Deepchecks is the leading tool for validating your machine learning models and data, and it enables doing so with minimal effort. Deepchecks accompanies you through various validation needs, such as verifying your data's integrity, inspecting its distributions, validating data splits, evaluating your model, and comparing different models.
Link
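A minimal sketch of what running Deepchecks looks like, assuming its tabular API and using made-up toy data (not an example from the project itself):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

# Toy data; a real project would load its own train/test frames.
train_df = pd.DataFrame({"feature": range(100), "label": [i % 2 for i in range(100)]})
test_df = pd.DataFrame({"feature": range(100, 140), "label": [i % 2 for i in range(40)]})

model = RandomForestClassifier().fit(train_df[["feature"]], train_df["label"])

train_ds = Dataset(train_df, label="label")
test_ds = Dataset(test_df, label="label")

# Runs the bundled integrity, drift, and performance checks in one pass.
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
```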
Physics-inspired continuous learning models on graphs make it possible to overcome the limitations of traditional GNNs
The message-passing paradigm has been the workhorse of deep learning on graphs for several years, making graph neural networks a big success in a…
Link
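For orientation, models of this kind typically evolve node features $X(t)$ with a diffusion PDE defined on the graph, with message-passing layers recovered as discrete time steps of the equation (a representative formulation from the graph neural diffusion literature, not necessarily this article's exact notation):

$$\frac{\partial X(t)}{\partial t} = \mathrm{div}\big(G(X(t), t)\,\nabla X(t)\big)$$

where the gradient and divergence operators act on graph edges and nodes, and $G$ is a learnable diffusivity.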
A collection of Variational AutoEncoders (VAEs) implemented in PyTorch with a focus on reproducibility.
The aim of this project is to provide a quick and simple working example for many of the cool VAE models out there. All the models are trained on the CelebA dataset for consistency and comparison.
The architectures of all the models are kept as similar as possible, with the same layers, except for cases where the original paper necessitates a radically different architecture (Ex.
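Whatever the architecture, the piece all these models share is the reparameterization trick; here is a minimal standalone sketch of it (my own illustration, not code from the repository):

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping gradients flowing."""
    std = torch.exp(0.5 * logvar)  # log-variance -> standard deviation
    eps = torch.randn_like(std)    # noise drawn outside the computation graph
    return mu + eps * std

mu, logvar = torch.zeros(4, 20), torch.zeros(4, 20)
z = reparameterize(mu, logvar)     # latent codes of shape [4, 20]
```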