wandb.init().
Manage multiprocess training using one of these approaches:
- Call `wandb.init` in all processes and use the `group` keyword argument to create a shared group. Each process will have its own wandb run, and the UI will group the training processes together.
- Call `wandb.init` from only one process and pass the data to log to it through multiprocessing queues.
Refer to the Distributed Training Guide for detailed explanations of these approaches, including code examples with Torch DDP.
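A minimal sketch of the second approach (logging from a single process via queues), with the training loop and metric names purely illustrative; the `wandb.init`/`run.log` calls are shown as comments so the pattern is clear without assuming a logged-in W&B session:

```python
import multiprocessing as mp

def worker(rank, queue):
    # Stand-in for a training loop: push one metrics dict per step.
    for step in range(3):
        queue.put({"rank": rank, "step": step, "loss": 1.0 / (step + 1)})
    queue.put(None)  # sentinel: this worker is finished

def main(num_workers=2):
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(r, queue)) for r in range(num_workers)]
    for p in procs:
        p.start()
    # In real code, only this process talks to W&B:
    # run = wandb.init(project="my-project")
    logged = []
    done = 0
    while done < num_workers:
        item = queue.get()
        if item is None:
            done += 1
        else:
            logged.append(item)  # real code: run.log(item)
    for p in procs:
        p.join()
    # real code: run.finish()
    return logged

if __name__ == "__main__":
    main()
```

Because only one process creates a run, all metrics land in a single run in the UI, which avoids per-process run clutter at the cost of funneling all logging through the main process.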
Experiments