AI Philosophy Research

What machines mean,
what they risk,
what we owe.

Independent philosophical research on the epistemic foundations, ethical architecture, and social implications of artificial intelligence.

Artur Ziganshin · Master of Philosophy · PhD in Philosophy

Research Areas

Epistemic Risks

How do AI systems generate persuasive but weakly grounded claims? Drawing on process reliabilism and virtue epistemology, I develop frameworks for auditing epistemic reliability before deployment.

Explore →

Ethical Architecture

Design principles for embedding normative constraints at model, interface, and institutional levels — so ethics is foundational, not an afterthought.

Explore →

Human Dignity

A framework grounded in Kantian ethics and capabilities theory for preserving agency, respect, and contestability in AI-mediated decisions.

Explore →

Recent Papers

View all
Preprint

Epistemic Risk Surfaces in Large Language Models

This paper develops a granular taxonomy of epistemic failure in large language models, distinguishing between confident error, synthetic coherence, and context-sensitive reliability collapse. I argue that benchmark performance cannot substitute for process-level justification and propose an audit architecture grounded in process reliabilism and virtue epistemology.

epistemology · LLMs · epistemic risk
Read on PhilArchive →
Preprint

Linguistic Symbolism and Meaning Compression in Machine Learning

This preprint examines the gap between linguistic fluency and semantic grounding by analyzing how symbolic structures are compressed during representation learning. I show why lexical competence in model outputs can mask referential fragility and propose criteria for distinguishing symbolic simulation from meaningful reference.

meaning · symbolism · language models
Read on PhilArchive →
Preprint

Human Dignity Constraints for Autonomous Decision Systems

This paper argues that dignity-preserving design requires more than fairness metrics. Drawing on Kantian ethics and capabilities theory, I outline institutional and interface-level constraints that preserve contestability, recognition, and agency in automated welfare, labor, and healthcare decisions.

human dignity · Kant · automated decisions
Read on PhilArchive →

The Epistemic Mirror

Weekly philosophical analysis of AI developments. No hype, no jargon — just clarity.

Free · Unsubscribe anytime · Hosted on Substack.

I investigate the philosophical foundations of artificial intelligence — focusing on what AI systems know, how they fail, and what we owe to the people affected by their decisions.

My work sits at the intersection of epistemology, ethics, and philosophy of language. I publish research on PhilArchive and write weekly philosophical analysis for a growing community of readers.

Read full CV →