deep-learning

Graph Machine Learning Explainability with PyG

Graph Neural Networks (GNNs) have become increasingly popular for processing graph-structured data, such as social networks, molecular graphs, and knowledge graphs. However, the complex structure of graph data and the non-linear relationships between nodes can make it difficult to understand why a GNN makes a particular prediction. As GNNs have risen in popularity, so has interest in explaining their predictions. Link
How to replicate ChatGPT with Langchain and GPT-3?

It is well-known that ChatGPT is currently capable of impressive feats. It is likely that many individuals have ideas for utilizing this technology in their own projects. However, it should be noted that ChatGPT does not currently have an official API. Using an unofficial API may result in difficulties. Link
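The core pattern behind replicating ChatGPT with a completion model is simple: keep the conversation history and prepend it to every new prompt. A minimal sketch in plain Python (the `llm` callable is a hypothetical stand-in for a GPT-3 completion call; LangChain's conversation chains package this same pattern with prompt templates and memory):

```python
class ChatSession:
    """Minimal chat loop: accumulate history and feed it back as context.

    `llm` is a hypothetical callable (prompt string -> completion string)
    standing in for a real GPT-3 API call.
    """

    def __init__(self, llm):
        self.llm = llm
        self.history = []  # list of (role, text) turns

    def ask(self, user_msg):
        self.history.append(("Human", user_msg))
        # Rebuild the full transcript each turn so the model sees prior context
        prompt = "\n".join(f"{role}: {text}" for role, text in self.history) + "\nAI:"
        reply = self.llm(prompt).strip()
        self.history.append(("AI", reply))
        return reply
```

Real implementations also truncate or summarize old turns to stay within the model's context window.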
LangChain Hub

Taking inspiration from the Hugging Face Hub, LangChainHub is a collection of artifacts useful for working with LangChain primitives such as prompts, chains, and agents. The goal of this repository is to be a central resource for sharing and discovering high-quality prompts, chains, and agents that combine to form complex LLM applications. Link
PyTorch

PyTorch Lightning and the Lightning Hydra Template are neat. Tensor Puzzles is a great intro. TorchScale is a nice library for building Transformers efficiently. BackPACK is interesting too. Link
TRL - Transformer Reinforcement Learning

With trl you can train transformer language models with Proximal Policy Optimization (PPO). The library is built on top of the transformers library by 🤗 Hugging Face, so pre-trained language models can be loaded directly via transformers. At this point, most decoder and encoder-decoder architectures are supported. Link
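At PPO's core is the clipped surrogate objective, which keeps the updated policy close to the one that generated the data. A plain-Python sketch of that objective for a single action (the function name and epsilon default are illustrative, not trl's API):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective for one action.

    ratio: pi_new(a|s) / pi_old(a|s); advantage: estimated advantage A(s, a).
    The ratio is clipped to [1 - eps, 1 + eps] so a single step cannot move
    the policy too far; the min takes the pessimistic (lower) bound.
    """
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return min(unclipped, clipped)
```

trl maximizes the expectation of this quantity over sampled generations, with rewards supplied per generated sequence.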
Writing a tokenizer with ChatGPT

This morning I decided to test how good ChatGPT is at generating a non-trivial piece of code. I want to write a complete interpreter along the lines of Robert Nystrom’s excellent book Crafting Interpreters. Link
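A scanner in the Crafting Interpreters style is essentially a loop that matches token patterns at the current position. A minimal regex-based sketch (the token set here is illustrative, not the book's full Lox grammar):

```python
import re

# Ordered patterns: first match at the current position wins
TOKEN_SPEC = [
    ("NUMBER", r"\d+(\.\d+)?"),
    ("IDENT", r"[A-Za-z_]\w*"),
    ("OP", r"[+\-*/=(){};]"),
    ("SKIP", r"\s+"),  # whitespace is consumed but not emitted
]

def tokenize(source):
    tokens = []
    pos = 0
    while pos < len(source):
        for kind, pattern in TOKEN_SPEC:
            m = re.match(pattern, source[pos:])
            if m:
                if kind != "SKIP":
                    tokens.append((kind, m.group(0)))
                pos += m.end()
                break
        else:
            raise SyntaxError(f"unexpected character {source[pos]!r} at {pos}")
    return tokens
```

A full interpreter would track line numbers and keyword types, but the match-advance loop stays the same.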
Illustrating Reinforcement Learning from Human Feedback (RLHF)

Language models have shown impressive capabilities in the past few years, generating diverse and compelling text from human input prompts. However, what makes a “good” text is inherently hard to define, as it is subjective and context-dependent. There are many applications: stories that call for creativity, informative text that should be truthful, or code snippets that should be executable. Link Github
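The reward model at the heart of RLHF is typically trained on pairwise human comparisons: given two completions, it should score the preferred one higher. A plain-Python sketch of that pairwise ranking loss (the function name is illustrative):

```python
import math

def reward_ranking_loss(score_chosen, score_rejected):
    """Pairwise reward-model loss: -log sigmoid(score_chosen - score_rejected).

    Minimizing this pushes the preferred completion's score above the
    rejected one's; equal scores give a loss of log(2).
    """
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The trained reward model then supplies the scalar reward that PPO optimizes the policy against.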