Training and inference for deep learning models involve many steps. The faster each experiment iteration runs, the more we can optimize overall model performance within a limited budget of time and resources. I have collected and organized several PyTorch tips and tricks that maximize memory efficiency and minimize run time. To get the most out of these tips, it also helps to understand how and why they work.
Link
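As a flavor of the kind of trick the post collects, here is a minimal sketch of two common ones: pinned-memory data loading with asynchronous host-to-device copies, and mixed-precision training with torch.cuda.amp. The toy model, dataset, and hyperparameters are hypothetical stand-ins, and a CUDA GPU is assumed.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy model and data, just to illustrate the pattern.
model = torch.nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))

# Tip 1: pin_memory keeps batches in page-locked host memory, so the
# .cuda(non_blocking=True) copies below can overlap with computation.
loader = DataLoader(dataset, batch_size=64, num_workers=2, pin_memory=True)

# Tip 2: autocast + GradScaler enable mixed-precision training on CUDA.
scaler = torch.cuda.amp.GradScaler()
for x, y in loader:
    x = x.cuda(non_blocking=True)
    y = y.cuda(non_blocking=True)
    optimizer.zero_grad(set_to_none=True)  # cheaper than zeroing gradient buffers in place
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```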
Learn how to use PyTorch, MONAI, and Python for computer vision with machine learning. One practical use case for artificial intelligence is healthcare imaging. In this course, you will improve your machine learning skills by building an algorithm for automatic liver segmentation.
Link
Over the last 10 months of working on PyTorch Lightning, the team and I have been exposed to many styles of structuring PyTorch code, and we have identified a few key places where people inadvertently introduce bottlenecks.
Link
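One concrete example of such a bottleneck (a common one, not necessarily quoted from the article) is the implicit CPU-GPU synchronization triggered by calling .item() on, or printing, the loss at every step. The sketch below, using a hypothetical model and random data, keeps the running loss on the device and synchronizes only once.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(128, 1).to(device)
optimizer = torch.optim.Adam(model.parameters())

running_loss = torch.zeros((), device=device)  # keep the accumulator on the GPU
for step in range(100):
    x = torch.randn(64, 128, device=device)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()

    # Bottleneck: loss.item() here would force a CPU-GPU sync every iteration.
    # Accumulating on-device defers the transfer to a single point.
    running_loss += loss.detach()

print(f"mean loss: {(running_loss / 100).item():.4f}")  # one sync at the end
```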
The T5 Transformer frames any NLP task as a text-to-text task, enabling pre-trained models to learn new tasks easily. Let’s teach the old dog a new trick!
Link
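To see what "text-to-text" means in practice, here is a minimal sketch using the Hugging Face transformers library and the public t5-small checkpoint (assumptions on my part; the article may use a different model or wrapper). The task is specified purely through a text prefix, and the answer comes back as text.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task becomes "input text -> output text"; the prefix tells T5 which task to run.
prompts = [
    "translate English to German: The house is wonderful.",
    "summarize: PyTorch tips can reduce training time and memory usage considerably.",
]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

Teaching the model a new task then amounts to fine-tuning on pairs of prefix-tagged inputs and target strings, which is what makes the text-to-text framing so convenient.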
Stop writing boilerplate code, struggling with authentication, and managing infrastructure. Start connecting APIs with code-level control when you need it — and no code when you don’t.