We all love chat.openai.com, but…
It’s TERRIBLY laggy, has daily limits, and is only accessible through an archaic web interface.
This repo re-creates ChatGPT as a Telegram bot on top of the GPT-3.5 LLM. And it works great.
Link
This document is for engineers and researchers (both individuals and teams) interested in maximizing the performance of deep learning models. We assume basic knowledge of machine learning and deep learning concepts.
Link
In this video, we cover the purpose and structure of the Full Stack Deep Learning 2022 course and talk about when ML is the right tool to solve your problems.
Link
LangChain
Generative AI sparked several “wow” moments in 2022, from generative art tools like OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion, to the next generation of Large Language Models like OpenAI’s GPT-3.5 models and BLOOM, and chatbots like LaMDA and ChatGPT.
Link
LangChain is a popular framework that allows users to quickly build apps and pipelines around Large Language Models. It integrates directly with OpenAI’s GPT-3 and GPT-3.5 models and Hugging Face’s open-source alternatives like Google’s flan-t5 models.
Link
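To make that integration concrete, here is a minimal sketch of a LangChain pipeline that runs the same prompt through an OpenAI completion model and a Hugging Face flan-t5 model. The import paths, model names, and placeholder keys are assumptions based on early LangChain releases (the API has shifted in later versions), not code from the linked article.

```python
# Minimal LangChain sketch: one prompt template, two LLM backends.
# Import paths and model names assume an early (0.0.x) LangChain release.
import os

from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI, HuggingFaceHub

os.environ["OPENAI_API_KEY"] = "sk-..."            # placeholder OpenAI key
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_..."  # placeholder Hugging Face token

template = """Question: {question}

Answer: """
prompt = PromptTemplate(template=template, input_variables=["question"])

# GPT-3.5-era OpenAI completion model
openai_llm = OpenAI(model_name="text-davinci-003", temperature=0)

# Open-source alternative hosted on the Hugging Face Hub
flan_llm = HuggingFaceHub(repo_id="google/flan-t5-xl",
                          model_kwargs={"temperature": 1e-10})

question = "Which NFL team won the Super Bowl in the 2010 season?"

# Run the same chain against both backends and compare the answers.
for llm in (openai_llm, flan_llm):
    chain = LLMChain(prompt=prompt, llm=llm)
    print(chain.run(question))
```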
We’re excited to announce our new service offering: GPT-3 fine-tuning as a service. If you’re looking to achieve better results, reduce latency, and save costs on a wide range of natural language processing (NLP) tasks, we’re here to help.
Link
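For context, a hedged sketch of what a GPT-3 fine-tune looked like through OpenAI’s legacy fine-tunes endpoint (openai-python 0.x): a JSONL file of prompt/completion pairs is uploaded and a job is started against a base model. The file name, example data, and hyperparameters below are placeholders, not details from the announcement.

```python
# Sketch of the legacy GPT-3 fine-tuning flow (openai-python 0.x).
import json
import openai

openai.api_key = "sk-..."  # placeholder

# Training data: prompt/completion pairs, one JSON object per line.
examples = [
    {"prompt": "How do I reset my password? ->", "completion": " Go to Settings > Security and choose Reset. END"},
    {"prompt": "Can I change my billing plan? ->", "completion": " Yes, open Billing and pick a new plan. END"},
]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the file, then start a fine-tune against a base GPT-3 model.
upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci", n_epochs=4)
print(job.id)  # poll with openai.FineTune.retrieve(job.id) until it finishes
```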
If you’ve spent any time with GPT-3 or ChatGPT, you’ve likely thought about how useful it would be if you could point them at a specific, current collection of text or documentation and have it use that as part of its input for answering questions.
Link
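A rough sketch of that idea, under a few assumptions: embed your documents once, embed each incoming question, and select the most similar passages to hand back to the model. The embedding model name and the tiny in-memory “index” are illustrative; a real setup would typically use a vector database.

```python
# Retrieval half of the pattern: embed documents, then rank them against a question.
import numpy as np
import openai

openai.api_key = "sk-..."  # placeholder

docs = [
    "Release 2.4 added single sign-on via SAML.",
    "The export endpoint is rate-limited to 100 requests per minute.",
    "Dark mode can be toggled under Settings > Appearance.",
]

def embed(texts):
    """Return one embedding vector per input text (assumed ada-002 model)."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vecs = embed(docs)

def top_passages(question, k=2):
    """Return the k documents most cosine-similar to the question."""
    q = embed([question])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:k]]

print(top_passages("How do I enable dark mode?"))
```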
Many use cases require GPT-3 to respond to user questions with insightful answers. For example, a customer support chatbot may need to provide answers to common questions. The GPT models have picked up a lot of general knowledge in training, but we often need to ingest and use a large library of more specific information.
Link
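Continuing the retrieval sketch above, here is a hedged example of the second half: assemble the retrieved passages into a prompt and ask a GPT-3 completion model to answer only from that context. The prompt wording and model name are assumptions, not a prescribed recipe.

```python
# Answer a support question using only retrieved passages as context.
import openai

openai.api_key = "sk-..."  # placeholder

def answer(question, passages):
    """Build a context-grounded prompt and return the model's answer."""
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the customer's question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0
    )
    return resp["choices"][0]["text"].strip()

# Example: pass in passages selected by the retrieval step sketched earlier.
print(answer(
    "How do I enable dark mode?",
    ["Dark mode can be toggled under Settings > Appearance."],
))
```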
Examples