  1. How to download a model from huggingface? - Stack Overflow

    May 19, 2021 · To download models from 🤗Hugging Face, you can use the official CLI tool huggingface-cli or the Python method snapshot_download from the huggingface_hub library.

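    The snippet names both routes, the huggingface-cli tool and snapshot_download; a minimal Python sketch of the latter, assuming huggingface_hub is installed and using bert-base-uncased as a stand-in repo id (recent CLI releases offer roughly the equivalent huggingface-cli download bert-base-uncased):

    ```python
    # Download a full model snapshot into the local Hugging Face cache
    from huggingface_hub import snapshot_download

    local_path = snapshot_download(repo_id="bert-base-uncased")  # stand-in repo id
    print(local_path)  # directory that now holds the downloaded files
    ```
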
  2. Loading Hugging Face model is taking too much memory

    Mar 13, 2023 · Loading a Hugging Face model is taking too much memory. Asked 2 years, 9 months ago. Modified 2 years, 9 months ago. Viewed 16k times.

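    The thread states only the symptom; a hedged sketch of the usual mitigations (half-precision weights, low_cpu_mem_usage, and an accelerate-managed device_map, none of which are quoted from the thread itself), with gpt2 as a stand-in model id:

    ```python
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "gpt2",                     # stand-in model id
        torch_dtype=torch.float16,  # halves the memory footprint of the weights
        low_cpu_mem_usage=True,     # avoids materialising a full fp32 copy while loading
        device_map="auto",          # needs accelerate; places layers on GPU/CPU as they fit
    )
    ```
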
  3. python - Efficiently using Hugging Face transformers pipelines on …

    Sep 22, 2023 · I'm relatively new to Python and facing some performance issues while using Hugging Face Transformers for sentiment analysis on a relatively large dataset. I've created a …

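    One common pattern for throughput (an assumption here, not necessarily the accepted answer) is to build the pipeline once and feed it a whole list with an explicit batch_size; the sketch assumes a GPU at index 0 and an in-memory list of strings:

    ```python
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis", device=0)  # device=0 -> first GPU, -1 -> CPU

    texts = ["great product", "terrible support", "okay overall"]  # stand-in data
    # Passing the list in one call lets the pipeline batch tokenisation and inference
    results = sentiment(texts, batch_size=32, truncation=True)
    for text, result in zip(texts, results):
        print(text, result["label"], round(result["score"], 3))
    ```
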
  4. python - Fixing Hugging Face Login Issue (504 Gateway Timeout …

    Feb 22, 2025 · Fixing Hugging Face Login Issue (504 Gateway Timeout & Invalid Token Error). Asked 10 months ago. Modified 10 months ago. Viewed 938 times.

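    A 504 is usually transient on the Hub side, while the invalid-token error typically means the token value itself (the hf_... string, not its display name) was not supplied. A sketch of a programmatic login, with an obviously fake token placeholder:

    ```python
    from huggingface_hub import login, whoami

    login(token="hf_xxx")  # paste the real token value generated under Settings -> Access Tokens
    print(whoami())        # raises if the token is rejected, otherwise prints account info
    ```
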
  5. Hugging Face Pipeline behind Proxies - Windows Server OS

    Mar 3, 2022 · I am trying to use the Hugging Face pipeline behind proxies. Consider the following lines of code: from transformers import pipeline; sentimentAnalysis_pipeline = pipeline …

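    Two routes are commonly suggested (both assumptions here, not quoted from the thread): export proxy environment variables before the first network call, or pass a proxies dict to from_pretrained. Sketch with a hypothetical proxy address:

    ```python
    import os

    # Route 1: environment variables picked up by the underlying HTTP client
    os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"   # hypothetical proxy
    os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"

    from transformers import pipeline
    sentimentAnalysis_pipeline = pipeline("sentiment-analysis")

    # Route 2: pass proxies explicitly when loading a model or tokenizer
    from transformers import AutoModelForSequenceClassification
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased-finetuned-sst-2-english",  # stand-in checkpoint
        proxies={"http": "http://proxy.example.com:8080",
                 "https": "http://proxy.example.com:8080"},
    )
    ```
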
  6. How to get a list of all Hugging Face models using Python?

    Mar 22, 2023 · Is there any way to get a list of the models available on Hugging Face? E.g. for Automatic Speech Recognition (ASR).

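    The Hub API exposes this listing through huggingface_hub; a sketch that prints a handful of ASR models, assuming a reasonably recent huggingface_hub release:

    ```python
    from huggingface_hub import HfApi

    api = HfApi()
    # Filter by the pipeline tag; limit keeps the otherwise very long listing manageable
    for model in api.list_models(filter="automatic-speech-recognition", limit=10):
        print(model.id)
    ```
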
  7. How to do Tokenizer Batch processing? - HuggingFace

    Jun 7, 2023 · For that you won't face many OOM issues. If you need to use a GPU, consider using pipeline(...) inference, which comes with the batch_size option, e.g. from transformers …

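    Tokenizers accept a list of strings in a single call, which covers the CPU case, and the GPU path is the pipeline(...) batch_size option the snippet mentions; a sketch with a stand-in checkpoint:

    ```python
    from transformers import AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in checkpoint

    # Batch tokenisation: pad/truncate to a common length and return PyTorch tensors
    batch = tokenizer(
        ["first example sentence", "a second, slightly longer example sentence"],
        padding=True,
        truncation=True,
        return_tensors="pt",
    )
    print(batch["input_ids"].shape)

    # GPU inference path: let the pipeline handle batching internally
    classifier = pipeline("text-classification", device=0)
    print(classifier(["first example sentence", "another one"], batch_size=8))
    ```
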
  8. python - ImportError in Hugging Face Integration ...

    Dec 19, 2024 · I am working on an AI project with Llama Index and the transformers library, integrating Hugging Face models. Below is my code snippet: from llama_index.core import …

  9. HuggingFace Inference Endpoints extremely slow performance

    Aug 10, 2023 · I'm using an AMD Ryzen 5 5000, so it might or might not be significantly slower than the Intel Xeon Ice Lake CPUs Hugging Face provides (they don't really tell you the model and …

  10. HuggingFace: Loading checkpoint shards taking too long

    Feb 1, 2024 · where I have cached a Hugging Face model using cache_dir within the from_pretrained() method. However, every time I load the model it needs to load the …
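
    If the shards are already cached, pointing from_pretrained at the same cache_dir and adding local_files_only (an assumption, not stated in the question) stops it re-checking the Hub on every load; shard deserialisation itself still scales with model size. Sketch with hypothetical paths:

    ```python
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",    # stand-in model id
        cache_dir="/data/hf_cache",    # hypothetical cache location used for the first download
        local_files_only=True,         # fail fast instead of querying the Hub for updates
    )
    ```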