This project explores the legal and ethical challenges of assigning liability in artificial intelligence systems. It examines how responsibility is distributed when decisions are delegated to automated systems, with reference to ongoing policy discussions and regulatory developments in the European Union, particularly the evolving EU AI Act.
- Analyze how liability is conceptualized in the context of AI systems
- Reflect on legal gaps in accountability and responsibility
- Explore the role of human oversight in mitigating AI risk
- Connect ethical concerns with emerging policy frameworks
- report.md – detailed analysis covering key legal, ethical, and governance perspectives
- summary.md – brief overview of project context and main takeaways
- references.md – supporting sources from academic, legal, and policy domains
- Risk distribution and responsibility in AI
- Legal and regulatory ambiguity
- Human vs. automated agency
- Ethical implications of opacity and accountability in AI systems
Gabrijela M.
AI & Data Analyst | Streamlit · Chatbots · Responsible AI
LinkedIn
This project is licensed under the MIT License.