# Distributed-TPU-Training

Google Colab offers GPUs and TPUs as hardware accelerators. While a single accelerator already speeds up training significantly, distributing the training across a pool of these accelerators (such as the eight cores of a Colab TPU) increases the speed even further.

This repository illustrates how to define a distribution strategy and train a model in a distributed way on a Colab TPU.
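
As a minimal sketch of what this looks like with `tf.distribute` (assuming TensorFlow 2.x and the Keras API; the two-layer model below is a hypothetical placeholder, not the model from this repository):

```python
import tensorflow as tf

# Connect to the TPU runtime that Colab attaches to the notebook.
# This resolver/connect/initialize sequence is the standard
# tf.distribute TPU setup.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across all TPU cores and splits
# each global batch between them.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Variables created inside the scope are mirrored on every replica.
    # This tiny classifier is a placeholder, not the repository's model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
```

With this setup, `model.fit` runs each training step on all TPU cores in parallel; the global batch is split evenly across the replicas, so the per-core batch size is the global batch size divided by `strategy.num_replicas_in_sync`.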