Code for reproducing the paper "Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction", to appear at the 7th Workshop on Noisy User-generated Text (W-NUT), co-located with EMNLP 2021.
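A minimal sketch of what a translation pair prediction (TPP) objective looks like, based only on the task named in the paper title: classify whether two posts are translations of each other. The model choice (`bert-base-multilingual-cased`), example pairs, and labels below are illustrative placeholders, not the repository's actual code.

```python
# Illustrative TPP objective: binary classification over sentence pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2  # 1 = translation pair, 0 = not
)

pairs = [
    ("good morning everyone!", "buenos dias a todos!"),   # positive pair
    ("good morning everyone!", "el partido empieza ya"),  # negative pair
]
labels = torch.tensor([1, 0])

# Encode both sentences jointly so the model can attend across the pair.
batch = tokenizer([a for a, _ in pairs], [b for _, b in pairs],
                  padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss
loss.backward()  # one training step on the auxiliary TPP task
```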
This repository presents a gemstone classification project using transfer learning with MobileNetV2, trained on a dataset of 3200+ images spanning 87 classes. TensorFlow and Keras are used for data preprocessing, augmentation, and model training, with fine-tuning applied to leverage the pre-trained features.
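A sketch of the transfer-learning setup described above, assuming a standard Keras workflow: freeze the pre-trained MobileNetV2 backbone, train a new classification head, then unfreeze for fine-tuning. The dataset path, epoch counts, and learning rates are placeholders, not the repository's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 87
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pre-trained features first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "gemstones/train", image_size=(224, 224), batch_size=32)  # placeholder path
model.fit(train_ds, epochs=5)

# Fine-tuning pass: unfreeze the backbone at a much lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```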
Explore the rich flavors of Indian desserts with TunedLlavaDelights. Using Llava fine-tuning, the project produces detailed nutritional profiles, taste notes, and optimal consumption times for beloved sweets. Dive into a fusion of AI innovation and culinary tradition.
Our GitHub repository supports the development of GPT for care assessment (Pflegebegutachtung) to improve accuracy and efficiency in nursing care. It provides specialized datasets, benchmarking tools, and validation code for innovators in AI and care. Get involved to advance care assessment through technology.
This Midjourney prompt generator makes digital creators' lives easier by generating specific Midjourney prompts, enabling them to produce more accurate and realistic images tailored to their needs.
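As a rough illustration of how such a generator can work, the sketch below assembles a prompt from a subject plus style modifiers and Midjourney parameters. The function name, option lists, and defaults are hypothetical, not this project's API.

```python
import random

STYLES = ["photorealistic", "watercolor", "cyberpunk", "isometric 3D"]
LIGHTING = ["golden hour", "studio lighting", "neon glow"]

def build_prompt(subject: str, aspect_ratio: str = "16:9") -> str:
    """Combine a subject with style modifiers and Midjourney parameters."""
    style = random.choice(STYLES)
    light = random.choice(LIGHTING)
    return f"{subject}, {style}, {light}, highly detailed --ar {aspect_ratio} --v 6"

print(build_prompt("a red fox in a snowy forest"))
# e.g. "a red fox in a snowy forest, cyberpunk, neon glow, highly detailed --ar 16:9 --v 6"
```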
An open-source framework designed to adapt pre-trained large language models (LLMs), such as Llama, Mistral, and Mixtral, to a wide array of domains and languages.
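One common recipe for this kind of adaptation is parameter-efficient fine-tuning with LoRA; the sketch below shows that generic pattern using Hugging Face `transformers` and `peft`. It is not this framework's actual API, and the base model name and LoRA hyperparameters are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Train small low-rank adapters instead of all base-model parameters.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the full model
```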
A simple machine learning pipeline for converting grayscale images to color. It uses a VGG-16 model, fine-tuned to make grayscale-to-RGB conversion more accurate.
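A rough sketch of one way to build a VGG-16-based colorizer: the pre-trained network encodes a grayscale input (replicated to three channels) and a small upsampling decoder predicts the two chrominance channels of Lab color space. The decoder layout and loss are illustrative assumptions, not the repository's exact architecture.

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(224, 224, 1))           # grayscale / L channel
x = tf.keras.layers.Concatenate()([inp, inp, inp])  # VGG-16 expects 3 channels
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
vgg.trainable = False                               # start from frozen features
feats = vgg(x)                                      # (7, 7, 512) feature map

# Simple upsampling decoder back to full resolution: 7 -> 224.
d = feats
for filters in (256, 128, 64, 32, 16):
    d = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(d)
    d = tf.keras.layers.UpSampling2D()(d)
ab = tf.keras.layers.Conv2D(2, 3, padding="same", activation="tanh")(d)  # a, b channels

model = tf.keras.Model(inp, ab)
model.compile(optimizer="adam", loss="mse")
```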
Fine-tuning is a cost-efficient way of preparing a model for specialized tasks: it reduces both the required training time and the amount of training data. Because open-source pre-trained models are available, we do not need to perform full training from scratch every time we create a model.
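A small illustration of where the cost saving comes from: with a frozen pre-trained backbone, only the new head's parameters are updated. The backbone (ResNet50) and head size below are arbitrary examples, chosen only to make the parameter counts concrete.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg")
base.trainable = False  # reuse pre-trained features as-is

model = tf.keras.Sequential([base,
                             tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

total = model.count_params()
trainable = sum(tf.size(w).numpy() for w in model.trainable_weights)
print(f"training {trainable:,} of {total:,} parameters")
# roughly 20K of ~23.6M parameters are trained, hence the reduced cost
```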