
Publication

Dynamic automatic differentiation of GPU broadcast kernels

Book contribution - Book chapter / Conference contribution

We show how forward-mode automatic differentiation (AD) can be employed within larger reverse-mode computations to dynamically differentiate broadcast operations in a GPU-friendly manner. Our technique fully exploits the broadcast Jacobian's inherent sparsity structure, and unlike a pure reverse-mode approach, this "mixed-mode" approach does not require a backwards pass over the broadcasted operation's subgraph, obviating the need for several reverse-mode-specific programmability restrictions on user-authored broadcast operations. Most notably, this approach allows broadcast fusion in primal code despite the presence of data-dependent control flow. We discuss an experiment in which a Julia implementation of our technique outperformed pure reverse-mode TensorFlow and Julia implementations for differentiating through broadcast operations within an HM-LSTM cell update calculation.
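The core idea can be illustrated with a minimal sketch (ours, not the paper's Julia implementation): because the Jacobian of an elementwise broadcast is diagonal, seeding each element with a forward-mode dual number during the forward pass recovers the Jacobian's diagonal, and the reverse-mode step degenerates to an elementwise multiply with the incoming cotangent. No backward pass over the kernel's subgraph is needed, so the scalar kernel may freely contain data-dependent control flow. All names below (`Dual`, `leaky`, `broadcast_vjp`) are hypothetical.

```python
import numpy as np

class Dual:
    """Minimal forward-mode dual number (illustrative sketch only)."""
    def __init__(self, primal, tangent=0.0):
        self.primal, self.tangent = primal, tangent
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.primal * other.primal,
                    self.tangent * other.primal + self.primal * other.tangent)
    __rmul__ = __mul__
    def __gt__(self, other):
        # Data-dependent control flow branches on the primal value.
        return self.primal > (other.primal if isinstance(other, Dual) else other)

def leaky(x):
    # Scalar kernel with a data-dependent branch; a pure reverse-mode
    # tracer would have to record or restrict this control flow.
    return x if x > 0 else 0.1 * x

def broadcast_vjp(kernel, xs, cotangent):
    # Forward-mode: seed each element with tangent 1.0 to recover the
    # diagonal of the (sparse, diagonal) broadcast Jacobian.
    duals = [kernel(Dual(x, 1.0)) for x in xs]
    primal = np.array([d.primal for d in duals])
    diag = np.array([d.tangent for d in duals])
    # Reverse-mode step reduces to an elementwise multiply:
    return primal, cotangent * diag

xs = np.array([-2.0, 0.5, 3.0])
primal, grad = broadcast_vjp(leaky, xs, np.ones(3))
# grad is [0.1, 1.0, 1.0]: the elementwise derivative of leaky at xs
```

On a GPU, the dual-number arithmetic fuses into the broadcast kernel itself, which is what makes this mixed-mode approach GPU-friendly.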
Book: 2018 Conference on Neural Information Processing Systems: proceedings
Number of pages: 1
Year of publication: 2018
Accessibility: Open