Table of contents

Artificial Intelligence

[Topic graph: categories (Artificial Intelligence, Machine Learning, Unsupervised, Neural networks) and the topics linked to them, including Supervised, Word embedding, LLMs, Transformers, Vector database, Qdrant, K-means, Perceptual Hashing, Paradigms of Artificial Intelligence Programming, Defense measures against bots, LLM Visualization, and several 3Blue1Brown videos on neural networks, LLMs, transformers and attention.]

Machine Learning


Unsupervised

Word embedding

A technique for mapping words or texts into an abstract multidimensional space where distance (usually [cosine distance]) indicates conceptual closeness. In some cases arithmetic operations on the vectors are meaningful; the classic example is King - Man + Woman ≈ Queen, or close to it, anyway.
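A minimal sketch of both ideas, assuming hand-made toy vectors (the three dimensions and their values below are invented for illustration; real embeddings are learned by models such as word2vec and have hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = same direction, 0.0 = orthogonal; cosine distance is 1 minus this.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented 3-dimensional "embeddings" (imagine axes like royalty/gender/person-ness).
vectors = {
    "king":  np.array([0.9, 0.9, 1.0]),
    "queen": np.array([0.9, 0.1, 1.0]),
    "man":   np.array([0.1, 0.9, 1.0]),
    "woman": np.array([0.1, 0.1, 1.0]),
}

# The classic analogy: king - man + woman should land closest to queen.
result = vectors["king"] - vectors["man"] + vectors["woman"]
for word, vec in vectors.items():
    print(f"{word:>5}: {cosine_similarity(result, vec):.3f}")
```

With these toy values "queen" scores highest (similarity 1.0); a trained model only lands *near* queen, which is why the analogy is stated with ≈.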

Neural networks

Mostly supervised ML models.

LLMs

  • Meaning: Large Language Models.

  • Most production-ready ones are based on Transformers.

Transformers

A neural network architecture, mostly used for sequence-to-sequence transformations; LLMs are the best-known example.
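A minimal sketch of scaled dot-product attention, the core operation inside a Transformer block, assuming toy shapes and no training (a real layer derives Q, K and V from the input with learned weight matrices; here the input is reused directly, i.e. bare self-attention):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a weighted average of the rows of V, weighted by
    # how strongly the matching query row attends to each key row.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) match scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# Toy input: a "sentence" of 3 tokens, each a 4-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one context-mixed vector per token
```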