ChatGPT came out a few months ago and blew everyone's mind with its ability to answer questions sourced from a broad set of knowledge. Around the time that ChatGPT was demonstrating how powerful large language models could be, the Dagster core team was facing a problem.
Link
ChatGPT is capable of impressive feats, and many people likely have ideas for using this technology in their own projects. However, ChatGPT does not currently have an official API, and relying on an unofficial one can lead to difficulties.
Link
Taking inspiration from Hugging Face Hub, LangChainHub is a collection of artifacts useful for working with LangChain primitives such as prompts, chains, and agents. The goal of the repository is to be a central resource for sharing and discovering high-quality prompts, chains, and agents that combine to form complex LLM applications.
Link
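As an illustration, here is a minimal sketch of pulling a shared prompt from the hub into LangChain. The `lc://` path and the `load_prompt` helper reflect how LangChainHub was addressed at the time of writing; treat the exact path as an assumption and check the repository for artifacts that actually exist.

```python
# Sketch: load a prompt shared on LangChainHub into a LangChain PromptTemplate.
# The "lc://prompts/hello-world/prompt.yaml" path is illustrative only.
from langchain.prompts import load_prompt

prompt = load_prompt("lc://prompts/hello-world/prompt.yaml")
print(prompt.template)  # inspect the raw template text behind the shared prompt
```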
With trl you can train transformer language models with Proximal Policy Optimization (PPO). The library is built on top of the transformers library by 🤗 Hugging Face, so pre-trained language models can be loaded directly via transformers. At this point, most decoder and encoder-decoder architectures are supported.
Link
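To make the workflow concrete, below is a condensed sketch of a single PPO step with trl, adapted from memory of the library's README; the constant reward is a stand-in for a real reward model, and exact class and argument names may differ between trl versions.

```python
import torch
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForCausalLMWithValueHead

# Wrap a pre-trained causal LM from transformers with a value head for PPO.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(
    PPOConfig(batch_size=1, mini_batch_size=1), model, ref_model, tokenizer
)

# Encode a query and sample a continuation from the policy model.
query = tokenizer.encode("This morning I went to the", return_tensors="pt")
output = model.generate(
    query, max_new_tokens=20, do_sample=True, pad_token_id=tokenizer.eos_token_id
)
response = output[0][query.shape[1]:]  # keep only the generated tokens

# Score the response (a constant here; normally a reward model or heuristic)
# and run one PPO optimization step against the frozen reference model.
reward = [torch.tensor(1.0)]
stats = ppo_trainer.step([query[0]], [response], reward)
```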
This morning I decided to test how good ChatGPT is at generating a non-trivial piece of code. I wanted to write a complete interpreter along the lines of Robert Nystrom's excellent book Crafting Interpreters.
Link
Language models have shown impressive capabilities in the past few years by generating diverse and compelling text from human input prompts. However, what makes a text “good” is inherently hard to define, as it is subjective and context dependent: stories call for creativity, informative text should be truthful, and code snippets should be executable.
Link
GitHub