Knowledge Base

An overview of five core concepts in AI/ML research.

Transformer
Architecture

A deep learning model architecture relying on self-attention mechanisms.

Definition

The Transformer architecture processes input sequences in parallel using self-attention, allowing it to capture long-range dependencies more effectively than RNNs. It consists of encoder and decoder stacks, each containing multi-head attention and feed-forward layers.

Related Concepts

Self-Attention, Positional Encoding, Multi-Head Attention

Key Papers

Attention Is All You Need, BERT

Examples: GPT-4, Claude
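The scaled dot-product self-attention at the heart of the architecture can be sketched in a few lines of NumPy. This is a minimal single-head illustration with random, hypothetical weight matrices; a real Transformer adds multiple heads, masking, learned parameters, and residual/normalization layers.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    x: (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices (hypothetical weights)
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                 # 4 tokens, model dimension 8
w = [rng.normal(size=(8, 8)) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)  # (4, 8)
```

Because every token attends to every other token in one matrix multiply, the sequence is processed in parallel rather than step by step as in an RNN.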

RLHF
Method

Reinforcement Learning from Human Feedback, used to align LLMs with human preferences.

Definition

RLHF trains a reward model on human preference data, then fine-tunes the language model using PPO to maximize the reward. This alignment technique helps reduce harmful outputs and improve helpfulness.

Related Concepts

PPO, Reward Model, Alignment

Key Papers

InstructGPT, Constitutional AI

Examples: ChatGPT alignment, Claude training
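The reward-model stage can be sketched with the standard pairwise (Bradley-Terry) preference loss, -log σ(r_chosen − r_rejected). The scores below are hypothetical reward-model outputs, not values from any real system.

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Pairwise Bradley-Terry loss for reward-model training:
    -log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    margin = r_chosen - r_rejected
    # log1p(exp(-m)) is a numerically stable form of -log(sigmoid(m))
    return float(np.mean(np.log1p(np.exp(-margin))))

# Hypothetical reward scores for chosen vs. rejected responses
r_chosen = np.array([2.0, 1.5, 0.3])
r_rejected = np.array([0.5, 1.0, 0.8])
loss = preference_loss(r_chosen, r_rejected)
```

Minimizing this loss pushes the reward model to score human-preferred responses higher; the trained reward model then supplies the reward signal for the PPO fine-tuning stage.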

BLEU Score
Metric

A metric for evaluating the quality of machine translated text.

Definition

BLEU (Bilingual Evaluation Understudy) compares n-gram overlaps between generated and reference translations. Scores range from 0 to 1, with higher scores indicating better translation quality.

Related Concepts

ROUGE, METEOR, BERTScore

Key Papers

BLEU: a Method for Automatic Evaluation

Examples: MT evaluation, Summarization scoring
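A simplified sentence-level BLEU can be sketched as below: clipped n-gram precision up to 4-grams, combined by geometric mean and scaled by a brevity penalty. This is an unsmoothed, single-reference sketch; production implementations (e.g. sacreBLEU) add smoothing and corpus-level statistics.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU (single reference, no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())    # clipped n-gram matches
        if overlap == 0:
            return 0.0
        precisions.append(overlap / sum(cand.values()))
    # Brevity penalty discourages overly short candidates
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat sat on the mat".split()
print(bleu("the cat sat on the mat".split(), reference))  # 1.0 for an exact match
```

Note that without smoothing, any candidate with zero 4-gram overlap scores 0, which is why smoothed variants are preferred for short sentences.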

Diffusion Models
Method

Generative models that learn to reverse a gradual noising process.

Definition

Diffusion models add Gaussian noise to data over multiple steps, then learn to reverse this process. They achieve state-of-the-art image generation by iteratively denoising random noise into coherent samples.

Related Concepts

Denoising, Score Matching, Latent Diffusion

Key Papers

DDPM, Stable Diffusion

Examples: Midjourney, Stable Diffusion XL
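The forward (noising) process has a closed form that can be sampled directly, as a rough sketch: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, where ᾱ_t is the cumulative product of (1 − β_t). The linear β schedule below follows the DDPM paper; the toy data vector is arbitrary, and the learned reverse (denoising) model is what an actual diffusion model trains.

```python
import numpy as np

def q_sample(x0, t, betas, rng):
    """Draw x_t from the forward noising process q(x_t | x_0) in one step,
    using the closed form x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise."""
    alpha_bar = np.cumprod(1.0 - betas)[t]   # cumulative signal retention
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)        # linear schedule (DDPM)
x0 = rng.normal(size=(16,))                  # toy "data" vector
x_late = q_sample(x0, t=999, betas=betas, rng=rng)  # almost pure noise
```

By the final timestep ᾱ_t is nearly zero, so x_t is essentially Gaussian noise; generation runs this process in reverse, denoising step by step.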

ImageNet
Dataset

Large-scale visual database for object recognition research.

Definition

ImageNet contains over 14 million images annotated with 20,000+ categories. The ILSVRC subset (1000 classes) became the standard benchmark for image classification, driving major advances in CNNs.

Related Concepts

Transfer LearningFine-tuningPretraining

Key Papers

ImageNet Classification with Deep CNNs

Examples: ResNet-50 on ImageNet, ViT benchmarks