traceremove
AI Philosophy Research

What machines mean,
what they risk,
what we owe.

Independent philosophical research on the epistemic foundations, ethical architecture, and social implications of artificial intelligence.

Artur Ziganshin · Master of Philosophy · PhD in Philosophy

7

Research Papers

5

Essays Published

3

Research Areas

Open Access

Research Areas

Three interconnected lines of inquiry into the philosophical foundations of AI.

Epistemology

Epistemic Risks

How do AI systems generate persuasive but weakly grounded claims? I develop frameworks for auditing epistemic reliability, drawing on process reliabilism and virtue epistemology.

Explore
Ethics

Ethical Architecture

Design principles for embedding normative constraints at the model, interface, and institutional levels — so ethics is structural, not decorative.

Explore
Social Philosophy

Human Dignity

A framework grounded in Kantian ethics and capabilities theory for preserving agency, respect, and contestability in AI-mediated decisions.

Explore

Recent Papers

View all
Preprint · 2025

Epistemic Risk Surfaces in Large Language Models

This paper develops a granular taxonomy of epistemic failure in large language models, distinguishing between confident error, synthetic coherence, and context-sensitive reliability collapse. I argue that benchmark performance cannot substitute for process-level justification and propose an audit architecture grounded in process reliabilism and virtue epistemology.

epistemology · LLMs · epistemic risk
Preprint · 2025

Linguistic Symbolism and Meaning Compression in Machine Learning

By analyzing how symbolic structures are compressed during representation learning, this preprint examines the gap between linguistic fluency and semantic grounding. I show why lexical competence in model outputs can mask referential fragility and propose criteria for distinguishing symbolic simulation from meaningful reference.

meaning · symbolism · language models
Preprint · 2025

Human Dignity Constraints for Autonomous Decision Systems

This paper argues that dignity-preserving design requires more than fairness metrics. Drawing on Kantian ethics and capabilities theory, I outline institutional and interface-level constraints that preserve contestability, recognition, and agency in automated welfare, labor, and healthcare decisions.

human dignity · Kant · automated decisions

The Epistemic Mirror

Weekly philosophical analysis of AI developments.
No hype. No jargon. Just clarity.

Free · Unsubscribe anytime

I investigate the philosophical foundations of artificial intelligence — focusing on what AI systems know, how they fail, and what we owe to the people affected by their decisions.

My work sits at the intersection of epistemology, ethics, and philosophy of language. I publish on PhilArchive and write weekly analysis for a growing community of readers.

Read full CV