Inferless
Popular repositories
- stable-video-diffusion (Public template): (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from an image conditioning. This model was trained to generate 25 frames at resolution 576x1024 given a contex…
- dolphin-2.5-mixtral-8x7b-GPTQ (Public): This Dolphin is really good at coding; it has been trained with a lot of coding data. It is very obedient, but it is not DPO-tuned, so you might still need to encourage it in the system prompt.
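Since the description suggests steering the model through its system prompt, here is a small sketch of the ChatML format that the Dolphin 2.5 models expect. The helper name is illustrative, not part of the repository:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt.

    The trailing '<|im_start|>assistant\n' cues the model to answer.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Encourage the model in the system prompt, as the description recommends.
prompt = build_chatml_prompt(
    "You are Dolphin, an expert coding assistant. Always answer with "
    "complete, working code.",
    "Write a Python function that reverses a string.",
)
print(prompt)
```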
Repositories
- Phi-3-128k (Public)
- Llama-2-13b-hf (Public): Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format.
- Llama-2-7b-chat (Public template): Llama 2 7B Chat is the smallest chat model in the Llama 2 family of large language models developed by Meta AI. It has 7 billion parameters, was pretrained on 2 trillion tokens of data from publicly available sources, and was fine-tuned with over one million human annotations.
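For reference, single-turn prompts for the Llama 2 chat models follow the `[INST]`/`<<SYS>>` template. A small sketch of that format (the helper name is illustrative):

```python
def build_llama2_chat_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the Llama 2 chat template."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system}\n"
        "<</SYS>>\n\n"
        f"{user} [/INST]"
    )

prompt = build_llama2_chat_prompt(
    "You are a helpful, honest assistant.",
    "Explain what a large language model is in one sentence.",
)
print(prompt)
```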
- RealVisXL_V4.0_Lightning (Public)
- inferless_template_streaming (Public)
- stable-diffusion-2-1 (Public template): This model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt) with an additional 55k steps on the same dataset (punsafe=0.1), then fine-tuned for another 155k steps with punsafe=0.98.
- stable-diffusion-xl (Public template): SDXL uses an ensemble-of-experts pipeline for latent diffusion: the base model first generates (noisy) latents, which are then further processed by a refinement model (available at https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.
- stable-diffusion-xl-turbo (Public template): SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. It is based on a novel training method called Adversarial Diffusion Distillation (ADD; see the technical report), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality.
- Donut-docVQA (Public)