This repository has been archived by the owner on Sep 12, 2024. It is now read-only.

Does autoLLM support local LLMs? #95

Answered by fcakyon
jonny7737 asked this question in Q&A

Yes @igoralvarezz, it is currently supported. You can create an instance of HuggingFaceLLM and use it in the autollm pipeline as follows:

import torch

from llama_index.llms import HuggingFaceLLM
from llama_index.prompts import PromptTemplate
from autollm import AutoServiceContext, AutoVectorStoreIndex, AutoQueryEngine

# wrap each query in the chat format expected by the StableLM tuned model
query_wrapper_prompt = PromptTemplate("<|USER|>{query_str}<|ASSISTANT|>")

llm = HuggingFaceLLM(
    context_window=4096,
    max_new_tokens=256,
    query_wrapper_prompt=query_wrapper_prompt,
    tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
    model_name="StabilityAI/stablelm-tuned-alpha-3b",
    device_map="auto",
    stopping_ids=[50278, 50279, 50277, 1, 0],
)

service_context = AutoServiceContext.from_defaults(llm=llm)
vector_store_index = AutoVectorStoreIndex.from_defaults()
query_engine = AutoQueryEngine.f…
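
The rest of the pipeline is truncated above; assuming query_engine ends up as a standard LlamaIndex query engine, a minimal usage sketch (the question string is only a placeholder) would be:

# query_engine is the engine produced by AutoQueryEngine above;
# .query() is the standard LlamaIndex query-engine call
response = query_engine.query("Does autollm support local LLMs?")
print(response)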

This discussion was converted from issue #68 on November 03, 2023 12:43.