Enabling PyTorch on XLA Devices (e.g. Google TPU)
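As context for this first entry (the pytorch/xla project), a minimal sketch of what running PyTorch on an XLA device looks like; `xm.xla_device()` and `xm.mark_step()` are the library's documented calls, while the tiny model here is purely illustrative.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

# Acquire the XLA device (a TPU core when running on TPU hardware).
device = xm.xla_device()

# Any ordinary PyTorch module can be moved to the XLA device.
model = nn.Linear(10, 2).to(device)
x = torch.randn(4, 10, device=device)
y = model(x)

# XLA executes lazily; mark_step() materializes the pending graph.
xm.mark_step()
print(y.shape)
```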
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡
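That description matches the mpi4jax project; assuming that library, here is a minimal allreduce sketch based on its documented usage (the discarded second return value is a token that orders successive MPI calls inside jitted code). Run it under mpirun.

```python
from mpi4py import MPI
import jax
import jax.numpy as jnp
import mpi4jax

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

@jax.jit
def global_sum(arr):
    # Reduce across all ranks; the token enforces MPI call ordering
    # inside the compiled function.
    summed, _token = mpi4jax.allreduce(arr, op=MPI.SUM, comm=comm)
    return summed

local = jnp.ones((3,)) * rank
print(global_sum(local))  # run with: mpirun -n 4 python this_script.py
```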
ALBERT model pretraining and fine-tuning using TF 2.0
Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload
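Independent of that library's specific API, the reversible-residual idea it builds on fits in a few lines of PyTorch: an additive coupling block whose inputs can be reconstructed exactly from its outputs, so activations need not be stored for backpropagation. The class and names below are illustrative, not the library's.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)."""

    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Exact reconstruction of the inputs from the outputs, which is
        # what lets RevNets recompute activations instead of storing them.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

block = ReversibleBlock(nn.Linear(8, 8), nn.Linear(8, 8))
x1, x2 = torch.randn(2, 8), torch.randn(2, 8)
y1, y2 = block(x1, x2)
r1, r2 = block.inverse(y1, y2)
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)
```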
TensorFlow 1.x-based pretrained-model wrapper supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision, with flexible training, validation, and prediction.
PyTorch distributed training acceleration framework
TensorFlow 2 training code with JIT compilation on multiple GPUs.
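In TF2, JIT-compiling a training step with XLA is typically a one-argument change; a minimal single-device sketch follows (a multi-GPU run would additionally wrap this in a tf.distribute strategy, omitted here, and the model and data are placeholders).

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
optimizer = tf.keras.optimizers.SGD(0.1)
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function(jit_compile=True)  # compile the whole step with XLA
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal((8, 4))
y = tf.random.normal((8, 2))
print(train_step(x, y).numpy())
```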
Fast and easy distributed model training examples.
Provides code to serialize the different models involved in Stable Diffusion as SavedModels and to compile them with XLA.
Versatile Data Ingestion Pipelines for JAX
Access Xspec models and corresponding JAX/XLA ops.
Classification of a multilingual dataset using pre-trained models trained only on English data; the model is trained on TPUs using PyTorch and the torch_xla library.
Deep learning inference performance analysis.
As the quality of large language models increases, so do our expectations of what they can do. Since the release of OpenAI's GPT-2, text generation has received particular attention, and for good reason: these models can be used for summarization, translation, and even real-time learning in some language tasks.
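As an illustration of the text-generation capability mentioned above, a minimal GPT-2 sampling sketch; the Hugging Face transformers library is an assumption here, not necessarily the stack this repository uses.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("XLA compiles tensor programs by", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,      # sample instead of greedy decoding
    top_p=0.9,           # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```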
A practical method for training energy-based language models.
Dataloading for JAX
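Dataloading for JAX usually amounts to producing NumPy batches on the host and converting them to device arrays; a generic sketch of that pattern follows (this is not the listed library's actual API).

```python
import numpy as np
import jax.numpy as jnp

def batches(xs: np.ndarray, ys: np.ndarray, batch_size: int, seed: int = 0):
    """Yield shuffled (x, y) minibatches as JAX device arrays."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(xs))
    for start in range(0, len(xs), batch_size):
        idx = order[start:start + batch_size]
        # Host-side NumPy slicing, then transfer to the default device.
        yield jnp.asarray(xs[idx]), jnp.asarray(ys[idx])

xs = np.random.randn(100, 8).astype(np.float32)
ys = np.random.randint(0, 2, size=100)
for xb, yb in batches(xs, ys, batch_size=32):
    pass  # feed xb, yb to a jitted training step
```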
Modern graph-mode TensorFlow implementation of the Super-Resolution GAN (SRGAN)
A Multivariate Gaussian Bayes classifier written using JAX
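The idea behind that classifier fits in a short JAX sketch: fit one multivariate Gaussian per class, then classify by maximum log-posterior. The function names are illustrative, not the repository's.

```python
import numpy as np
import jax.numpy as jnp
from jax.scipy.stats import multivariate_normal

def fit(xs, ys, n_classes):
    """Per-class mean, covariance, and log-prior."""
    params = []
    for c in range(n_classes):
        xc = xs[ys == c]
        mean = xc.mean(axis=0)
        # Small diagonal jitter keeps the covariance invertible.
        cov = jnp.cov(xc, rowvar=False) + 1e-6 * jnp.eye(xs.shape[1])
        log_prior = jnp.log(xc.shape[0] / xs.shape[0])
        params.append((mean, cov, log_prior))
    return params

def predict(params, x):
    # argmax over class log-posteriors: log p(x|c) + log p(c)
    scores = jnp.stack([
        multivariate_normal.logpdf(x, mean, cov) + lp
        for mean, cov, lp in params
    ])
    return jnp.argmax(scores, axis=0)

xs = jnp.asarray(np.random.randn(200, 3).astype(np.float32))
ys = jnp.asarray(np.random.randint(0, 2, 200))
print(predict(fit(xs, ys, n_classes=2), xs[:5]))
```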