【布客】huggingface Chinese Translation
Processors
huggingface Chinese Documentation
peft
Get started
🤗 PEFT
Quicktour
Installation
Tutorial
Configurations and models
Integrations
PEFT method guides
Prompt-based methods
LoRA methods
IA3
Developer guides
Model merging
Quantization
LoRA
Custom models
Adapter injection
Mixed adapter types
Contribute to PEFT
Troubleshooting
🤗 Accelerate integrations
DeepSpeed
Fully Sharded Data Parallel
Conceptual guides
Adapters
Soft prompts
IA3
API reference
diffusers
Get started
🧨 Diffusers
Quicktour
Effective and efficient diffusion
Installation
Tutorials
Overview
Understanding pipelines, models and schedulers
AutoPipeline
Train a diffusion model
Load LoRAs for inference
Accelerate inference of text-to-image diffusion models
Using Diffusers
Loading & Hub
Overview
Load pipelines, models, and schedulers
Load and compare different schedulers
Load community pipelines and components
Load safetensors
Load different Stable Diffusion formats
Load adapters
Push files to the Hub
Tasks
Overview
Unconditional image generation
Text-to-image
Image-to-image
Inpainting
Text or image-to-video
Depth-to-image
Techniques
Textual inversion
IP-Adapter
Merge LoRAs
Distributed inference with multiple GPUs
Improve image quality with deterministic generation
Control image brightness
Prompt weighting
Improve generation quality with FreeU
Specific pipeline examples
Overview
Stable Diffusion XL
SDXL Turbo
Kandinsky
ControlNet
Shap-E
DiffEdit
Distilled Stable Diffusion inference
Pipeline callbacks
Create reproducible pipelines
Community pipelines
Contribute a community pipeline
Latent Consistency Model-LoRA
Latent Consistency Model
Trajectory Consistency Distillation-LoRA
Stable Video Diffusion
Training
Overview
Create a dataset for training
Adapt a model to a new task
Models
Unconditional image generation
Text-to-image
Stable Diffusion XL
Kandinsky 2.2
Wuerstchen
ControlNet
T2I-Adapters
InstructPix2Pix
Methods
Textual Inversion
DreamBooth
LoRA
Custom Diffusion
Latent Consistency Distillation
Reinforcement learning training with DDPO
Taking Diffusers Beyond Images
Other Modalities
Optimization
Overview
General optimizations
Speed up inference
Reduce memory usage
PyTorch 2.0
xFormers
Token merging
DeepCache
Optimized model types
JAX/Flax
ONNX
OpenVINO
Core ML
Optimized hardware
Metal Performance Shaders (MPS)
Habana Gaudi
Conceptual Guides
Philosophy
Controlled generation
How to contribute?
Diffusers' Ethical Guidelines
Evaluating Diffusion Models
API
transformers
Get started
🤗 Transformers
Quick tour
Installation
Tutorials
Run inference with pipelines
Write portable code with AutoClass
Preprocess data
Fine-tune a pretrained model
Train with a script
Set up distributed training with 🤗 Accelerate
Load and train adapters with 🤗 PEFT
Share your model
Agents
Generation with LLMs
Task Guides (structure needs to be reorganized manually)
Natural language processing
Audio
Computer vision
Multimodal
Generation
Prompting
Developer guides
Use fast tokenizers from 🤗 Tokenizers
Run inference with multilingual models
Use model-specific APIs
Share a custom model
Templates for chat models
Trainer
Run training on Amazon SageMaker
Export to ONNX
Export to TFLite
Export to TorchScript
Benchmarks
Notebooks with examples
Community resources
Custom Tools and Prompts
Troubleshoot
Contribute new quantization method
Performance and scalability
Overview
Quantization
Efficient training techniques
Methods and tools for efficient training on a single GPU
Multiple GPUs and parallelism
Fully Sharded Data Parallel
DeepSpeed
Efficient training on CPU
Distributed CPU training
Training on TPU with TensorFlow
PyTorch training on Apple silicon
Custom hardware for training
Hyperparameter Search using Trainer API
Optimizing inference
CPU inference
GPU inference
Instantiate a big model
Debugging
XLA Integration for TensorFlow Models
Optimize inference using `torch.compile()`
Contribute
How to contribute to 🤗 Transformers?
How to add a model to 🤗 Transformers?
How to convert a 🤗 Transformers model to TensorFlow?
How to add a pipeline to 🤗 Transformers?
Testing
Checks on a Pull Request
Conceptual guides
Philosophy
Glossary
What 🤗 Transformers can do
How 🤗 Transformers solve tasks
The Transformer model family
Summary of the tokenizers
Attention mechanisms
Padding and truncation
BERTology
Perplexity of fixed-length models
Pipelines for webserver inference
Model training anatomy
Getting the most out of LLMs
API
tokenizers
Getting started
🤗 Tokenizers
Quicktour
Installation
The tokenization pipeline
Components
Training from memory
API
Contribution Guide
About Us
Join Us
Chinese Resource Collection