AI Liability Analysis

This project explores the legal and ethical challenges of assigning liability in artificial intelligence systems, building on a European Commission (EC) report on liability in AI. It examines how responsibility is distributed when decisions are delegated to automated systems, with reference to ongoing policy discussions and regulatory developments in the European Union, particularly the evolving EU AI Act.

Objectives

  • Analyze how liability is conceptualized in the context of AI systems
  • Reflect on legal gaps in accountability and responsibility
  • Explore the role of human oversight in mitigating AI risk
  • Connect ethical concerns with emerging policy frameworks

Structure

  • report.md – detailed analysis covering key legal, ethical, and governance perspectives
  • summary.md – brief overview of project context and main takeaways
  • references.md – supporting sources from academic, legal, and policy domains

Key Themes

  • Risk distribution and responsibility in AI
  • Legal and regulatory ambiguity
  • Human vs. automated agency
  • Ethical implications of opacity and accountability in AI systems

Author

Gabrijela M.
AI & Data Analyst | Streamlit · Chatbots · Responsible AI

License

This project is licensed under the MIT License.
