Number of the records: 1
Generative deep learning
Title statement: Generative deep learning : teaching machines to paint, write, compose, and play / David Foster ; foreword by Karl Friston
Personal name: Foster, David (author)
Edition statement: Second edition
Publication: Beijing ; Boston ; Farnham ; Sebastopol ; Tokyo : O'Reilly, 2023
Physical description: xxvi, 426 pages : illustrations
ISBN: 978-1-0981-3418-1 (paperback)
Internal Bibliographies/Indexes Note: Includes bibliographies and index
Another responsibility: Friston, K. J. (Karl J.), 1959- (author of introduction)
Subject headings: programming * neural networks (computer science) * learning systems * deep learning * machine learning
Form, Genre: handbooks and manuals
Conspect: 004.8 - Artificial intelligence
UDC: 004.42 , 004.8.032.26 , 004.85 , 004.852 , (035)
Country: China ; United States ; Great Britain ; Japan
Language: English
Document kind: Books
Call number: M2/1880 (PřF)
Barcode: 3134054699
Location: PřF
Sublocation: PřF, KMA – RNDr. Vodák
Info: In-Library Use Only
"Generative modeling is one of the hottest topics in AI. It's now possible to teach a machine to excel at human endeavors such as painting, writing, and composing music. With this practical book, machine-learning engineers and data scientists will discover how to re-create some of the most impressive examples of generative deep learning models, such as variational autoencoders, generative adversarial networks (GANs), encoder-decoder models, and world models. Author David Foster demonstrates the inner workings of each technique, starting with the basics of deep learning before advancing to some of the most cutting-edge algorithms in the field. Through tips and tricks, you'll understand how to make your models learn more efficiently and become more creative. Discover how variational autoencoders can change facial expressions in photos; build practical GAN examples from scratch, including CycleGAN for style transfer and MuseGAN for music generation; create recurrent generative models for text generation and learn how to improve the models using attention; understand how generative models can help agents to accomplish tasks within a reinforcement learning setting; explore the architecture of the Transformer (BERT, GPT-2) and image generation models such as ProGAN and StyleGAN."--Publisher's annotation