Sample code and guidelines for fine-tuning any open-source GPT model using #deepspeed and #huggingface
Updated Mar 31, 2023
This repository presents a gemstone classification project using transfer learning with MobileNetV2 on a dataset of 3,200+ images spanning 87 classes. TensorFlow and Keras were used for data preprocessing, augmentation, and model training, with the pre-trained features fine-tuned for the classification task.
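The transfer-learning recipe in the entry above (freeze a pre-trained base, train only a new classification head) can be illustrated with a toy NumPy sketch. This is a minimal sketch under stated assumptions: the frozen random projection stands in for the MobileNetV2 backbone, and the synthetic data stand in for the gemstone images; neither comes from the actual repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" feature extractor: a frozen random projection.
# (Illustrative assumption -- the real project freezes a MobileNetV2 base.)
W_frozen = rng.normal(scale=1.0 / 8.0, size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen ReLU features

# Tiny synthetic stand-in for the image dataset (90 samples, 3 classes).
X = rng.normal(size=(90, 64))
W_true = rng.normal(size=(16, 3))
feats = extract_features(X)
y = (feats @ W_true).argmax(axis=1)        # labels the head can recover

# New classification head -- the only trainable parameters.
W_head = np.zeros((16, 3))
b_head = np.zeros(3)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):                       # train the head; W_frozen never updates
    p = softmax(feats @ W_head + b_head)
    p[np.arange(len(y)), y] -= 1.0         # gradient of cross-entropy w.r.t. logits
    W_head -= 0.5 * feats.T @ p / len(y)
    b_head -= 0.5 * p.mean(axis=0)

acc = float((softmax(feats @ W_head + b_head).argmax(axis=1) == y).mean())
print(f"head-only training accuracy: {acc:.2f}")
```

In the real Keras setup the same effect comes from setting `trainable=False` on the MobileNetV2 base and compiling a model with only the new head layers trainable.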
Explore the rich flavors of Indian desserts with TunedLlavaDelights. Using Llava fine-tuning, our project unveils detailed nutritional profiles, taste notes, and optimal consumption times for beloved sweets. Dive into a fusion of AI innovation and culinary tradition.
Trying to create a digital version of me via StableDiffusion - LoRA
Our GitHub repository advances the development of GPT for care assessment (Pflegebegutachtung) to improve accuracy and efficiency in nursing care. It offers specialized datasets, benchmarking tools, and validation code for innovators in AI and care. Get involved to advance care assessment through technology.
A T5-base model fine-tuned for abstract-to-title generation (abstract2title)
AlgoTrading101 Warren Buffett Chatbot with ChatGPT and OpenAI Whisper
Code for reproducing the paper Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction to appear at The 7th Workshop on Noisy User-generated Text (W-NUT) organized at EMNLP 2021.
A Human Computation Based Dream-Interpreter using GPT-3
Elemental Planes Image Collection Enhancement: Earn Sage Points for cleaning up images.
Classification of flowers by fine-tuning ResNet and Inception models, with image augmentation and random image erasing
Scalable Protein Language Model Finetuning with Distributed Learning and Advanced Training Techniques such as LoRA.
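LoRA, mentioned in the entry above, freezes the pre-trained weight matrices and trains only a low-rank update to each one. A minimal NumPy sketch of the mechanics follows; the matrix sizes, rank, and `alpha` value are illustrative assumptions, not values from any of these repositories.

```python
import numpy as np

rng = np.random.default_rng(42)

d_out, d_in, r = 512, 512, 8        # illustrative sizes; rank r << d
alpha = 16.0                        # LoRA scaling hyperparameter

W = rng.normal(size=(d_out, d_in))  # frozen pre-trained weight

# Trainable low-rank factors: A initialized Gaussian, B initialized to
# zero, so the adapted layer starts out identical to the frozen one.
A = rng.normal(scale=0.01, size=(r, d_in))
B = np.zeros((d_out, r))

def adapted_forward(x):
    """y = W x + (alpha/r) * B A x -- only A and B are trained."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(adapted_forward(x), W @ x)  # B == 0: output unchanged so far

full = W.size
lora = A.size + B.size
print(f"trainable params: {lora} vs full fine-tune {full} ({lora / full:.1%})")
```

The parameter saving is the point: here the two low-rank factors hold about 3% of the weights of the full matrix, which is what makes LoRA practical for large protein language models and for SDXL alike.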
This project helps write new YouTube video scripts from scratch using a fine-tuned Llama-7B
SDXL finetuning using kohya-ss LoRA method.
Deeplearning utils for multimodal research
Code for our paper "Transfer Learning for Sequence Generation: from Single-source to Multi-source" in ACL 2021.
Praetor is a lightweight finetuning data and prompt management tool
Fine-tuning a chatbot
FlipDrip AI, your personalized fashion destination! Discover the latest trends tailored just for you.
A tool for AMR gene family prediction, simple and ML-based