NLP

Generative Question-Answering with Long-Term Memory

Generative AI sparked several “wow” moments in 2022, from generative art tools like OpenAI’s DALL-E 2, Midjourney, and Stable Diffusion, to the next generation of large language models like OpenAI’s GPT-3.5 family and BLOOM, and chatbots like LaMDA and ChatGPT.
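
A minimal sketch of what “long-term memory” means in this setting: embed documents once, keep the vectors around, and recall the most similar passages at query time. This assumes the legacy openai Python SDK (<1.0) with an API key configured; the documents and model name are illustrative, and a real system would use a vector database rather than an in-memory array.

```python
import numpy as np
import openai

docs = [
    "Pinecone is a managed vector database.",
    "ChatGPT was released by OpenAI in November 2022.",
]

def embed(texts):
    # Legacy (<1.0) openai SDK embedding call
    res = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in res["data"]])

memory = embed(docs)  # the "long-term memory": one vector per document

def recall(query, k=1):
    # Cosine similarity between the query vector and every stored vector
    q = embed([query])[0]
    scores = memory @ q / (np.linalg.norm(memory, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(recall("Which vector database is managed?"))
```
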
Question Answering using Embeddings (OpenAI)

Many use cases require GPT-3 to respond to user questions with insightful answers. For example, a customer support chatbot may need to answer common questions. The GPT models picked up a lot of general knowledge in training, but we often need to ingest and use a large library of more specific information.
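
A minimal sketch of that pattern: retrieve the most relevant passage by embedding similarity (e.g. with the recall() sketch above, or any vector search) and place it in the prompt, so the model answers from your specific library rather than its general training knowledge. Assumes the legacy openai SDK (<1.0); the model name and example strings are illustrative.

```python
import openai

def answer(question, context):
    prompt = (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say \"I don't know.\"\n\n"
        f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    )
    res = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=100, temperature=0
    )
    return res["choices"][0]["text"].strip()

print(answer("What is our refund window?", "Refunds are accepted within 30 days."))
```
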
quickchat.ai

Quickchat is a human-like AI assistant that provides accurate and instant answers to customer questions.
Reinforcement Learning for tuning language models (how to train ChatGPT)

The Large Language Model revolution started with the advent of transformers in 2017. Since then, model scale has grown exponentially, and models with 100B+ parameters have been trained. These pre-trained models have changed the way NLP is done: it is much easier to pick a pre-trained model and fine-tune it for a downstream task (sentiment analysis, question answering, entity recognition, etc.) than to train a model from scratch.
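
As a concrete illustration of that fine-tune-a-pre-trained-model workflow, a minimal sketch using Hugging Face Transformers for a downstream sentiment task; the model, dataset, subset sizes, and hyperparameters are all illustrative choices, not part of the linked article.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # pre-trained body, new classifier head

ds = load_dataset("imdb")
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True, padding="max_length",
                                max_length=256), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=ds["train"].shuffle(seed=42).select(range(2000)),  # small subset
    eval_dataset=ds["test"].select(range(500)),
)
trainer.train()  # fine-tunes all weights on the downstream task
```
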
Trudo.ai

A tool for fine-tuning NLP models (GPT-3/ChatGPT). See also the tutorial “Fine-Tuning GPT-3/ChatGPT and Zapier Integration: A Tutorial for No Code OpenAI Developers.”
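
For reference, the workflow such no-code tools wrap can also be driven directly; a minimal sketch with the legacy openai SDK (<1.0), where the file name and base model are illustrative and the JSONL file uses the legacy prompt/completion fine-tuning format.

```python
import openai

# training.jsonl, one example per line (legacy fine-tuning format):
# {"prompt": "Where is my order?", "completion": " You can track it in your account."}
f = openai.File.create(file=open("training.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=f.id, model="davinci")
print(job["id"])  # poll status with openai.FineTune.retrieve(id=job["id"])
```
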
VALL-E

An unofficial PyTorch implementation of VALL-E, based on the EnCodec tokenizer.
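
VALL-E’s key idea is to model speech as the discrete codes produced by EnCodec rather than as raw audio. A minimal sketch of that tokenization step using Meta’s encodec package; the input file name and bandwidth setting are illustrative.

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)  # 6 kbps -> 8 codebooks per frame

wav, sr = torchaudio.load("speech.wav")  # illustrative input file
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    encoded_frames = model.encode(wav)  # list of (codes, scale) tuples

codes = torch.cat([c for c, _ in encoded_frames], dim=-1)  # [B, n_q, T]
print(codes.shape)  # the discrete tokens a VALL-E-style LM is trained on
```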