Popular repositories

  1. Smaug-72B Smaug-72B Public

    Smaug-72B topped the Hugging Face LLM leaderboard and is the first model with an average score of 80, making it the world’s best open-source foundation model.

    Python 15 4

  2. stable-video-diffusion stable-video-diffusion Public template

    (SVD) Image-to-Video is a latent diffusion model trained to generate short video clips from a single conditioning image. This model was trained to generate 25 frames at resolution 576x1024 given a contex…

    Python 6 4

  3. TensorRT-LLM TensorRT-LLM Public

    5

  4. bark bark Public template

    Bark is a transformer-based text-to-audio model created by Suno. Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound …

    Python 4 10

  5. RMBG-1.4 RMBG-1.4 Public template

    RMBG v1.4 is our state-of-the-art background removal model, designed to effectively separate foreground from background in a range of categories and image types.

    Python 4 4

  6. dolphin-2.5-mixtral-8x7b-GPTQ dolphin-2.5-mixtral-8x7b-GPTQ Public

    This Dolphin is really good at coding; it has been trained with a lot of coding data. It is very obedient, but it is not DPO tuned, so you may still need to encourage it in the system prompt.

    Python 2 3

Repositories

Showing 10 of 116 repositories
  • Python 0 1 0 0 Updated Jun 13, 2024
  • Llama-2-13b-hf Public

    Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

    Python 0 3 0 0 Updated Jun 13, 2024
  • Llama-2-7b-chat Public template

    Llama 2 7B Chat is the smallest chat model in the Llama 2 family of large language models developed by Meta AI. This model has 7 billion parameters and was pretrained on 2 trillion tokens of data from publicly available sources. It has been fine-tuned with over one million human annotations.

    Python 0 1 0 0 Updated Jun 13, 2024
  • Python 0 3 0 0 Updated Jun 12, 2024
  • Python 0 1 0 0 Updated Jun 10, 2024
  • stable-diffusion-2-1 Public template

    This stable-diffusion-2-1 model is fine-tuned from stable-diffusion-2 (768-v-ema.ckpt) with an additional 55k steps on the same dataset (with punsafe=0.1), and then fine-tuned for another 155k extra steps with punsafe=0.98.

    Python 0 6 0 0 Updated Jun 8, 2024
  • stable-diffusion-xl Public template

    SDXL consists of an ensemble of experts pipeline for latent diffusion: In a first step, the base model is used to generate (noisy) latents, which are then further processed with a refinement model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/) specialized for the final denoising steps.

    Python 0 9 0 0 Updated Jun 8, 2024
  • stable-diffusion-xl-turbo Public template

    SDXL-Turbo is a distilled version of SDXL 1.0, trained for real-time synthesis. SDXL-Turbo is based on a novel training method called Adversarial Diffusion Distillation (ADD) (see the technical report), which allows sampling large-scale foundational image diffusion models in 1 to 4 steps at high image quality.

    Python 1 8 0 0 Updated Jun 8, 2024
  • Python 0 1 0 0 Updated May 31, 2024
  • Flan-UL2 Public
    Python 0 0 0 0 Updated May 31, 2024

People

This organization has no public members.
