Do you want to learn how Transformers work, step by step, and how they helped solve many of AI's problems with data?
Join us in this video.
Download the slides:
https://drive.google.com/file/d/1uSHTU5vfwesPX0QrJCh-IZ7qyoWc7qUj/view?usp=sharing
Explore the fascinating world of Transformers and Attention mechanisms in this comprehensive video! Whether you're a tech enthusiast or just curious about artificial intelligence, we delve into the intricacies of how Transformers revolutionized the field. From their inception to their impact on natural language processing and beyond, join us on a journey through the layers of attention in neural networks.
00:00 Introduction
02:21 How Does an RNN Work?
04:51 RNN Limitations
07:40 Vectors
12:18 Tokenization
14:18 Embeddings
15:43 Positional Embeddings
18:55 Self Attention
25:05 Multi-Head Attention
32:01 What Are Key, Query, and Value?
36:48 Masked Multi-Head Attention
42:04 How to Train a Transformer
48:54 Transformer Inference
53:22 Why Encoder + Decoder?
54:18 Transformers as Embedding Models
55:50 Greedy Search
56:50 Conclusion