Price: $21.83
Neural networks surround us, in the form of large language models, speech transcription systems, molecular discovery algorithms, robotics, and much more. Stripped of everything else, neural networks are compositions of differentiable primitives, and studying them means learning how to program with, and interact with, these models: a particular example of what is called differentiable programming.
This primer is an introduction to this fascinating field, imagined for someone who, like Alice, has just ventured into this strange differentiable wonderland. I give an overview of the basics of optimizing a function via automatic differentiation, along with a selection of the most common designs for handling sequences, graphs, text, and audio. The focus is on an intuitive, self-contained introduction to the most important design techniques, including convolutional, attentional, and recurrent blocks, hoping to bridge the gap between theory and code (PyTorch and JAX) and leaving the reader able to understand some of the most advanced models out there, such as large language models (LLMs) and multimodal architectures.
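As a taste of the kind of material covered (this sketch is not taken from the book), the lines below minimize a toy function with gradient descent using JAX's automatic differentiation; the function, learning rate, and number of steps are arbitrary choices for illustration.

import jax
import jax.numpy as jnp

# Toy scalar function to minimize: f(w) = (w - 3)^2
def f(w):
    return (w - 3.0) ** 2

grad_f = jax.grad(f)       # derivative of f, built by automatic differentiation

w = jnp.array(0.0)         # arbitrary starting point
lr = 0.1                   # arbitrary learning rate

for _ in range(100):
    w = w - lr * grad_f(w)     # plain gradient descent step

print(w)                   # approaches the minimizer w = 3.0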
The book is supplemented by a companion website where I will publish additional chapters and coding exercises (https://www.sscardapane.it/alice-book). The book is self-published to keep the price as low as possible; feedback on possible imprecisions is welcome and rewarded with a (very Italian) coffee!
About the author: Simone Scardapane is a researcher at Sapienza University of Rome, where he teaches neural networks and machine learning. In his free time, he (endlessly) talks about machine learning at the intersection of the non-profit, academic, and industrial worlds.
Table of contents:
Chapter 1: Foreword and introduction
Chapter 2: Mathematical preliminaries
Chapter 3: Datasets and losses
Chapter 4: Linear models
Chapter 5: Fully-connected layers
Chapter 6: Automatic differentiation
Chapter 7: Convolutional layers
Chapter 8: Convolutions beyond images
Chapter 9: Scaling up the models
Chapter 10: Transformer models
Chapter 11: Transformers in practice
Chapter 12: Graph layers
Chapter 13: Recurrent layers
Appendix A: Probability theory
Appendix B: Universal approximation in 1D
ASIN: B0D9QHS5NG
Publisher: Independently published (July 16, 2024)
Language: English
Paperback: 378 pages
ISBN-13: 979-8332166181
Item Weight: 1.41 pounds
Dimensions: 6 x 0.86 x 9 inches