Issues: lmstudio-ai/mlx-engine
Labels: bug (Something isn't working), enhancement (New feature or request), good first issue (Good for newcomers), fixed-in-next-release (The next release of LM Studio fixes this issue), beta-available.

#57 Failed to load the model (The requested number of bits 3 is not supported.) [beta-available, fixed-in-next-release] (opened Dec 8, 2024 by certik)
#50 Replace reimplementation of VLM prepare_inputs with mlx_vlm.utils.prepare_inputs [enhancement, good first issue] (opened Nov 27, 2024 by neilmehta24)
#49 Add tests for multi-image VLM prompts, and for followup prompts [enhancement, good first issue] (opened Nov 27, 2024 by neilmehta24)
#42 Tokens returned with a GenerationResult are off when compared to text [bug] (opened Nov 22, 2024 by mattjcly)
#40 Set wired limit before starting generation [enhancement, fixed-in-next-release] (opened Nov 22, 2024 by neilmehta24)
#37 Add logprobs to generation result [enhancement] (opened Nov 15, 2024 by neilmehta24)
#35 Can only search for models from mlx-community [fixed-in-next-release] (opened Nov 11, 2024 by YorkieDev)
#33 Failed to Index Model error mlx-community/Mamba-Codestral-7B-v0.1-8bit [bug, fixed-in-next-release] (opened Nov 8, 2024 by YorkieDev)
#31 Add KV cache quantization feature [enhancement] (opened Nov 8, 2024 by neilmehta24)
#29 Phi 3.5 Vision Instruct fails to load with "Trust remote code" error [enhancement] (opened Nov 6, 2024 by YorkieDev)
#28 Pixtral 12B context size cannot be configured beyond 2048 in LM Studio [bug, fixed-in-next-release] (opened Nov 6, 2024 by neilmehta24)
#27 Repeated generation regression with Qwen2-VL-7B-Instruct-4bit and default LM Studio generation config [bug, fixed-in-next-release] (opened Nov 4, 2024 by mattjcly)
#25 LM Studio (0.3.5) fails to load mllama model [fixed-in-next-release] (opened Nov 1, 2024 by Aaronthecowboy)
#24 Refactor CacheWrapper._find_common_prefix to use MLX instead of numpy [good first issue] (opened Oct 31, 2024 by neilmehta24)
#17 Qwen2-VL-7B giving error on 0.3.5 [fixed-in-next-release] (opened Oct 22, 2024 by ThakurRajAnand)
#13 Ministral 8B downloaded but unable to load [fixed-in-next-release] (opened Oct 17, 2024 by bhupesh-sf)
#5 Feature request: add support for Pixtral and other vision models (Llama 3.2 11B/90B etc.) [fixed-in-next-release] (opened Oct 8, 2024 by YorkieDev)