
Transformer

Definition

A deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data.

Deep Dive

The Transformer is a deep learning architecture, introduced in the 2017 paper "Attention Is All You Need," that has become the cornerstone of state-of-the-art models in natural language processing (NLP) and, increasingly, computer vision. Its key innovation is its complete reliance on the self-attention mechanism, which lets the model weigh the importance of every part of the input sequence when processing each element. This allows it to capture long-range dependencies more effectively and efficiently than earlier recurrent (RNN) or convolutional (CNN) architectures, which struggled to relate distant parts of a sequence.
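The core of self-attention is scaled dot-product attention: each token's query is compared against every token's key, and the resulting weights mix the value vectors. A minimal NumPy sketch (the function name and toy dimensions are illustrative, not from any particular library):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq, seq): similarity of each query to each key
    # Row-wise softmax turns scores into attention weights that sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights      # weighted mix of values, plus the weights

# Toy self-attention: 3 tokens with embedding dimension 4, using the
# embeddings themselves as queries, keys, and values.
np.random.seed(0)
x = np.random.randn(3, 4)
out, w = scaled_dot_product_attention(x, x, x)
```

In a full Transformer, Q, K, and V come from separate learned linear projections of the input, and several such attention "heads" run in parallel.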

Examples & Use Cases

  • Google's BERT model uses a Transformer encoder to understand the bidirectional context of words in a sentence for search queries.
  • OpenAI's GPT series (GPT-3, GPT-4) uses a Transformer decoder for generating human-like text and engaging in complex conversations.
  • Machine translation, where a Transformer encoder-decoder architecture translates text from one language to another with high fluency.

Related Terms

Self-Attention, BERT, GPT

Part of the hmu.ai extensive business and technology library.