Dance Diffusion
Translator: 片刻小哥哥
Project page: https://huggingface.apachecn.org/docs/diffusers/api/pipelines/dance_diffusion
Original page: https://huggingface.co/docs/diffusers/api/pipelines/dance_diffusion
Dance Diffusion is by Zach Evans.
Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai.
The original codebase of this implementation can be found at Harmonai-org.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
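As a minimal sketch (not from the original page), swapping the scheduler follows the usual diffusers pattern; IPNDMScheduler is used here because it is the scheduler this pipeline documents as compatible:

```py
from diffusers import DanceDiffusionPipeline, IPNDMScheduler

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")

# Rebuild a scheduler from the current one's config and swap it in.
pipe.scheduler = IPNDMScheduler.from_config(pipe.scheduler.config)
```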
DanceDiffusionPipeline
class diffusers.DanceDiffusionPipeline [source](https://github.com/huggingface/diffusers/blob/v0.23.0/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L28)

( unet, scheduler )
Parameters

- unet (UNet1DModel) — A UNet1DModel to denoise the encoded audio.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of IPNDMScheduler.
Pipeline for audio generation.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
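For illustration, here is a hedged sketch (not from the original page) of assembling the pipeline directly from its two components; the `subfolder="unet"` layout is an assumption based on the standard diffusers checkpoint structure:

```py
from diffusers import DanceDiffusionPipeline, IPNDMScheduler, UNet1DModel

# Assumed layout: the 1D UNet weights live in an "unet" subfolder of the repo.
unet = UNet1DModel.from_pretrained("harmonai/maestro-150k", subfolder="unet")
scheduler = IPNDMScheduler()

# The pipeline simply pairs the 1D UNet with a compatible scheduler.
pipe = DanceDiffusionPipeline(unet=unet, scheduler=scheduler)
```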
__call__ [source](https://github.com/huggingface/diffusers/blob/v0.23.0/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py#L48)

( batch_size: int = 1, num_inference_steps: int = 100, generator: typing.Union[torch.Generator, typing.List[torch.Generator], NoneType] = None, audio_length_in_s: typing.Optional[float] = None, return_dict: bool = True ) → AudioPipelineOutput or tuple
Parameters

- batch_size (int, optional, defaults to 1) — The number of audio samples to generate.
- num_inference_steps (int, optional, defaults to 100) — The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at the expense of slower inference.
- generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.
- audio_length_in_s (float, optional, defaults to self.unet.config.sample_size / self.unet.config.sample_rate) — The length of the generated audio sample in seconds.
- return_dict (bool, optional, defaults to True) — Whether or not to return an AudioPipelineOutput instead of a plain tuple.
Returns

AudioPipelineOutput or tuple

If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated audio.
The call function to the pipeline for generation.
Example:
from diffusers import DiffusionPipeline
from scipy.io.wavfile import write

model_id = "harmonai/maestro-150k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

audios = pipe(audio_length_in_s=4.0).audios

# To save locally
for i, audio in enumerate(audios):
    write(f"maestro_test_{i}.wav", pipe.unet.config.sample_rate, audio.transpose())

# To display in Google Colab
import IPython.display as ipd

for audio in audios:
    display(ipd.Audio(audio, rate=pipe.unet.config.sample_rate))
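As a follow-up sketch (not from the original page), the call parameters documented above can be combined for reproducible batched generation; the seed value is arbitrary and assumes the pipe from the example above:

```py
import torch

# A seeded generator makes sampling deterministic across runs.
generator = torch.Generator(device="cuda").manual_seed(0)

audios = pipe(
    batch_size=2,              # two clips in one call
    num_inference_steps=100,   # more steps: higher quality, slower inference
    audio_length_in_s=4.0,
    generator=generator,
).audios  # NumPy array batch of generated audio
```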
AudioPipelineOutput

class diffusers.AudioPipelineOutput [source](https://github.com/huggingface/diffusers/blob/v0.23.0/src/diffusers/pipelines/pipeline_utils.py#L124)

( audios: ndarray )

Parameters

- audios (np.ndarray) — List of denoised audio samples as a NumPy array of shape (batch_size, num_channels, sample_rate).

Output class for audio pipelines.
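A short sketch (assuming a pipe created as in the example above) of the two ways the output can be consumed, mirroring the return_dict parameter documented earlier:

```py
# Default: a structured AudioPipelineOutput with an .audios field.
output = pipe(audio_length_in_s=4.0)
audio_batch = output.audios  # np.ndarray batch of generated audio

# return_dict=False: a plain tuple whose first element is the audio batch.
(audio_batch,) = pipe(audio_length_in_s=4.0, return_dict=False)
```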