A map of philosophical concepts essential for understanding AI. Click any concept to explore.
A belief counts as knowledge when it is produced by a reliably truth-tracking process. Applied to AI, this asks whether a model's generation mechanisms remain stable across changing contexts. It reframes evaluation from output quality to process quality.
Epistemic luck appears when an answer is true by accident rather than robust understanding. AI systems can look accurate while succeeding for unstable reasons. This distinction matters for high-stakes deployment.
Much human knowledge comes from trusted testimony rather than direct verification. AI outputs now function like a new testimonial source. The core question is when machine testimony deserves credibility.
Epistemic injustice occurs when people are wronged as knowers. Biased datasets and ranking systems can systematically devalue certain voices. AI can scale this injustice in automated decision loops.
Epistemic risk names the gap between fluent output and justified reliability. A system can sound certain while lacking stable truth-tracking processes. This is central to Ziganshin's research agenda.
Kantian dignity treats persons as ends in themselves, never merely as means. Automated systems risk reducing people to optimization variables. This principle grounds non-negotiable ethical constraints.
The capabilities approach evaluates justice by what people are genuinely able to do and be. AI can either expand or constrict those real freedoms. It links technical design to lived human flourishing.
Virtue ethics focuses on character and practical wisdom rather than rule compliance alone. AI development requires cultivated judgment under uncertainty. It foregrounds habits of responsible technological practice.
Informed consent requires clear understanding and voluntary agreement. AI systems often operate through opaque defaults that undercut genuine consent. Philosophical analysis distinguishes formal click-through from meaningful choice.
Fairness in AI has many competing mathematical definitions. Choosing one is a normative decision, not a purely technical fact. Philosophy clarifies which fairness ideal fits a given social context.
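The tension between fairness definitions can be made concrete with a minimal sketch (all numbers, group labels, and helper functions here are invented for illustration, not drawn from any particular system): two groups receive positive predictions at the same rate, satisfying demographic parity, while their true-positive rates diverge, violating the equal-opportunity component of equalized odds.

```python
# Toy illustration: two common fairness criteria can conflict
# when groups have different base rates of the positive label.
# All data below is fabricated for the example.

def selection_rate(preds):
    """Fraction of individuals receiving a positive prediction."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among truly positive individuals, fraction predicted positive."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Group A: high base rate of qualified individuals (3 of 4)
labels_a = [1, 1, 1, 0]
preds_a  = [1, 1, 0, 0]   # 2 of 4 selected

# Group B: low base rate of qualified individuals (1 of 4)
labels_b = [1, 0, 0, 0]
preds_b  = [1, 1, 0, 0]   # 2 of 4 selected

# Demographic parity holds: equal selection rates across groups.
assert selection_rate(preds_a) == selection_rate(preds_b) == 0.5

# Equalized odds (TPR component) fails: the rates diverge.
tpr_a = true_positive_rate(preds_a, labels_a)  # 2/3
tpr_b = true_positive_rate(preds_b, labels_b)  # 1/1
assert tpr_a != tpr_b
```

The point of the sketch is philosophical, not computational: no parameter tuning can satisfy both criteria here, so choosing which one to enforce is a normative judgment about what fairness requires in context.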
Searle's Chinese Room argues that symbol manipulation is not understanding. Modern LLMs give this argument new force at unprecedented scale. It remains a core test for claims about machine cognition.
The hard problem asks why physical processes are accompanied by subjective experience. AI capability growth does not automatically resolve this puzzle. Consciousness claims need philosophical and empirical caution.
Intentionality is the aboutness of mental states—their directedness toward objects or propositions. AI representations raise the question of whether they are genuinely about anything. This issue sits between mind and language.
Embodied cognition argues that intelligence is rooted in bodily engagement with the world. Pure symbol processing may miss practical and perceptual dimensions of understanding. This challenges text-only intelligence claims.
Functionalism defines mental states by causal-functional role rather than physical substrate. It is often used to argue that machines could have minds. But functional equivalence remains difficult to establish in practice.
The grounding problem asks how symbols connect to what they represent. LLMs are strong at pattern completion but weak at world anchoring. This gap is central to language-model limitations.
Wittgenstein's view ties meaning to social use in language games. Meaning emerges from practices, norms, and forms of life. This raises the question of whether LLMs participate in genuine norm-governed discourse.
Reference concerns how terms latch onto objects, kinds, and individuals in the world. Competing theories reveal that meaning is not just dictionary definition. AI must handle reference to avoid fluent misdescription.
Democratic oversight insists AI governance cannot be left only to firms or technical experts. Affected publics need participatory voice in rule-setting. Legitimacy depends on accountable institutions.
AI enables dense monitoring, prediction, and behavioral steering. Surveillance is not only data collection but a structure of power. Philosophical critique reveals how visibility and control become asymmetric.
Distributive justice asks who gains and who bears the costs of AI systems. Benefits and harms are often unevenly allocated. Fair deployment requires explicit principles for distribution and repair.