Explore Russell's paradox, the liar sentence, and Curry's paradox, with formal derivations and resolution strategies.
Logical paradoxes are self-referential statements or constructions that lead to contradictions within a given logical system. These paradoxes have driven major developments in logic, set theory, and the foundations of mathematics throughout the 20th century.
Discovered by Bertrand Russell in 1901, this paradox exposed a fatal flaw in naive set theory. Consider the set R = {x | x ∉ x}, the set of all sets that do not contain themselves. The question "Is R ∈ R?" leads to contradiction: if R ∈ R, then R must satisfy the defining condition x ∉ x, so R ∉ R; if R ∉ R, then R satisfies that condition, so R ∈ R. Either answer entails its opposite.
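The argument is short enough to check mechanically. Below is a minimal Lean 4 sketch, with membership modeled as an abstract relation `mem` (a name introduced here, not part of any particular set theory): the mere assumption that some R satisfies ∀x (x ∈ R ↔ x ∉ x) yields a contradiction.

```lean
-- Minimal sketch: `mem` is an abstract membership relation and `hR` says
-- that R behaves like {x | x ∉ x}. That assumption alone is contradictory.
theorem russell {U : Type} (mem : U → U → Prop)
    (R : U) (hR : ∀ x, mem x R ↔ ¬ mem x x) : False :=
  -- Specialize R's defining property to R itself: R ∈ R ↔ R ∉ R.
  have h : mem R R ↔ ¬ mem R R := hR R
  -- If R ∈ R then R ∉ R, refuting itself; hence R ∉ R holds outright...
  have hn : ¬ mem R R := fun hm => h.mp hm hm
  -- ...but then the equivalence gives R ∈ R, a contradiction.
  hn (h.mpr hn)
```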
This paradox motivated the development of axiomatic set theory (ZFC), which places careful restrictions on set formation; in particular, its axiom schema of separation replaces unrestricted comprehension with a restricted form.
One of the oldest paradoxes, dating to ancient Greece. Consider the sentence "This sentence is false." If we try to assign it a truth value, neither choice is stable: if the sentence is true, then what it says holds, so it is false; if it is false, then what it says fails to hold, so it is true.
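The propositional skeleton of the liar can be formalized the same way as Russell's argument: no proposition can be equivalent to its own negation. A Lean 4 sketch, where the self-referential equivalence is taken as a hypothesis rather than constructed (the names are illustrative):

```lean
-- Sketch: if a proposition L were equivalent to its own negation,
-- as the liar sentence purports to be, a contradiction follows.
theorem liar {L : Prop} (h : L ↔ ¬L) : False :=
  -- If L held, it would imply ¬L, refuting itself; hence ¬L.
  have hn : ¬L := fun hl => h.mp hl hl
  -- But then the equivalence gives L back, contradicting ¬L.
  hn (h.mpr hn)
```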
The strengthened liar, "This sentence is not true," is designed to defeat responses that give the liar no truth value: if the sentence is neither true nor false, then in particular it is not true, which is exactly what it says, so it is true after all. This challenges our naive understanding of truth predicates.
A more subtle paradox that works even in systems without negation. Consider the sentence C: "If C is true, then P" for an arbitrary proposition P. We can derive P (including absurdity) as follows: assume C is true; then the antecedent of C holds, so by modus ponens on C itself we obtain P. This reasoning establishes "if C is true, then P" outright, which is exactly the sentence C. Having thereby proved C, one more application of modus ponens yields P.
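The derivation can be written out in Lean 4. This is a sketch under the strong assumption that some proposition C is equivalent to C → P; how such a self-referential C arises is exactly what the paradox exploits and is not modeled here.

```lean
-- Sketch of Curry's derivation: from C ↔ (C → P) we get P, using only
-- modus ponens and conditional proof, with no negation anywhere.
theorem curry {C P : Prop} (h : C ↔ (C → P)) : P :=
  -- Assume C; unfolding C gives C → P, which we apply to the assumed C.
  have hCP : C → P := fun c => h.mp c c
  -- Having proved C → P, we have proved C itself; apply hCP to it.
  hCP (h.mpr hCP)
```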
This derives an arbitrary proposition from self-reference alone, showing that unrestricted conditionals combined with self-reference make a system trivial, and they do so without appealing to negation or to the principle of explosion.
Consider "the smallest positive integer not definable in fewer than twenty words." This phrase defines a number in fewer than twenty words, but if it succeeds in defining a number, that number must not be definable in fewer than twenty words—contradiction.
Berry's paradox highlights issues with definability predicates and suggests that "definable" cannot be defined within the language itself without leading to paradox.
Russell's own solution was type theory: a set of type n + 1 can only contain elements of type n. The expression x ∉ x is then ill-formed, since no variable can stand on both sides of the membership sign, so the set {x | x ∉ x} cannot be written at all. This prevents self-reference, but at the cost of a complex type hierarchy.
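As a small illustration of the typed point of view (using Lean's own type theory, which is not Russell's ramified hierarchy but enforces the same discipline in this respect), self-application of a predicate is not merely false; it cannot even be written:

```lean
-- A predicate over Nat has type Nat → Prop, not Nat, so it can be applied
-- to numbers but never to itself.
#check fun (P : Nat → Prop) (n : Nat) => P n   -- well-typed application
-- #check fun (P : Nat → Prop) => P P          -- rejected: type mismatch
```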
Modern set theory (ZFC) uses the axiom schema of separation: you can only form {x ∈ A | φ(x)} relative to some already existing set A, never the unrestricted {x | φ(x)}. This blocks Russell's paradox while keeping a simpler framework than a full type hierarchy.
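To see how the restriction changes the outcome, here is a Lean 4 sketch in the same abstract style as before, where `mem` and `sep` are illustrative stand-ins for membership and the separation schema: running the Russell argument against a set A now refutes not the theory but the existence of a universal set.

```lean
-- With separation, {x ∈ A | x ∉ x} exists for each A; the diagonal trick
-- then shows only that no set V can contain every set.
theorem no_universal_set {U : Type} (mem : U → U → Prop)
    (sep : ∀ (A : U) (φ : U → Prop), ∃ s, ∀ x, mem x s ↔ mem x A ∧ φ x) :
    ¬ ∃ V, ∀ x, mem x V :=
  fun ⟨V, hV⟩ =>
    -- Apply separation to the alleged universal set V with φ(x) := x ∉ x.
    Exists.elim (sep V fun x => ¬ mem x x) fun R hR =>
      have h : mem R R ↔ mem R V ∧ ¬ mem R R := hR R
      -- R ∈ R would imply R ∉ R, so R ∉ R holds.
      have hn : ¬ mem R R := fun hm => (h.mp hm).2 hm
      -- But R ∈ V (V is universal) and R ∉ R, so R ∈ R after all.
      hn (h.mpr ⟨hV R, hn⟩)
```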
Alfred Tarski showed that truth predicates must be hierarchical. The truth predicate for language L cannot be defined in L itself, but only in a metalanguage L'. Each level can only talk about truth at lower levels, preventing the liar paradox.
This captures natural language usage where "that's true" refers to some other statement, but challenges our intuition that we have a single unified notion of truth.
Another strategy is to reject the principle of explosion (ex contradictione quodlibet), the rule that from a contradiction anything follows. In paraconsistent logics, some contradictions can be tolerated without triviality: the liar sentence can be both true and false without the entire system collapsing.
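For contrast, here is the explosion rule itself as it holds in a logic like Lean's, which is not paraconsistent; rejecting precisely this inference is what lets paraconsistent systems contain contradictions locally.

```lean
-- Ex contradictione quodlibet: from A and ¬A together, any B follows.
theorem explosion {A B : Prop} (h : A ∧ ¬A) : B :=
  absurd h.1 h.2
```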
Dialetheism goes further, accepting true contradictions as genuine. Graham Priest has argued that liar sentences are genuinely both true and false.
Saul Kripke developed a partial truth predicate using fixed-point constructions. Some sentences (like the liar) remain undefined in the minimal fixed point, neither true nor false. This captures intuitions about groundedness: truth is grounded in non-semantic facts.
Kripke's construction builds up truth values iteratively, starting from sentences with no semantic predicates, then adding sentences whose truth depends only on already-evaluated sentences, reaching a fixed point where some sentences remain gappy.
Many paradoxes share a common structure: diagonal arguments or self-reference. Cantor's diagonal argument (proving uncountability of reals), Gödel's incompleteness theorem, and the halting problem all use similar diagonalization techniques.
The pattern: assume you can enumerate or define all objects of type X, then construct a new object that differs from the n-th object in the enumeration at the n-th position. This new object cannot appear anywhere in the enumeration, contradicting the assumption that the enumeration was complete.
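The two-line argument used above for Russell and the liar is, at bottom, this diagonal step. A Lean 4 sketch of the abstract pattern (names are illustrative): no assignment f of predicates to objects can realize the diagonal predicate g(x) := ¬ f x x.

```lean
-- If f a agreed with g := fun x => ¬ f x x at every point, then in
-- particular f a a ↔ ¬ f a a, which is impossible.
theorem diagonal {α : Type} (f : α → (α → Prop)) :
    ¬ ∃ a, ∀ x, f a x ↔ ¬ f x x :=
  fun ⟨a, ha⟩ =>
    have h : f a a ↔ ¬ f a a := ha a
    have hn : ¬ f a a := fun hm => h.mp hm hm
    hn (h.mpr hn)
```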
The discovery of these paradoxes in the late 19th and early 20th centuries created a foundational crisis in mathematics. Russell, Hilbert, Gödel, Tarski, and others developed new logical frameworks in response. The resolution efforts led to modern mathematical logic, set theory, and the philosophy of language as we know them today.