AI, Ethics, and Responsibility
Introduction
🎯 Learning goals
- Understand why AI raises ethical questions
- Know the risks of AI: privacy, the surveillance society, and the impact on the labor market
- Know how power and control over AI systems affect society
- Be able to ask critical questions about AI use
AI is not just technology — it’s a societal issue. The technology affects who gets jobs, who gets loans, who is monitored, what information you see, and who ends up in prison. When AI makes or influences decisions about people’s lives, we must ask questions about fairness, accountability, and power.
AI systems collect and analyze enormous amounts of data about you — often without you being fully aware of it or having approved how it’s used.
Issue 1: Privacy and surveillance
AI builds profiles of you based on your searches, what you watch, where you are, who you talk to, what you buy, your voice, and your face.
Why is this a problem?
You don’t know what they know about you
Companies may know more about your preferences, political views, health, and finances than your closest friends.
Data can be used in ways you haven’t approved
You may have given Facebook permission to use your images to “improve the user experience”, but did you know this could mean training face recognition systems sold to governments?
The surveillance society
Some countries use AI for mass surveillance. In China, face recognition is used in public places, and a “social credit system” evaluates citizens’ behavior; low scores can lead to restrictions such as bans on buying plane tickets. Elsewhere, AI is used to analyze social media and identify protesters.
Data breaches and misuse
If a company holds a lot of data about you and that data is leaked, hackers, criminals, or hostile states could gain access to sensitive information.
What can you do?
- Be aware of which apps and services you allow to collect data
- Use privacy settings on social media
- Support legislation that protects privacy (such as the EU’s GDPR)
AI can automate and amplify society’s injustices at a scale impossible for human decision-makers.
Issue 2: Bias and discrimination
Real-world examples
- Amazon recruitment: AI trained on 10 years of hiring data (mostly men in tech roles) penalized women’s CVs, for example giving “women’s chess club” negative points. The system was scrapped.
- Criminal justice (US): AI used to predict the risk of re-offending overestimated that risk for Black defendants compared to white ones, leading to unjust sentencing and parole decisions.
- Face recognition: Systems are significantly worse at recognizing the faces of people with darker skin and of women, creating a risk of wrongful arrests.
- Loan applications: AI can discriminate against people with lower education, minorities, or people from certain areas, affecting their ability to buy a home or start a business.
Why does this happen?
- Historical data reflects historical injustices: If the data shows that women have historically received lower salaries, AI learns that women “are worth less”
- Proxy variables: AI cannot directly discriminate based on race (that would be illegal), but it learns correlations; zip codes can be a proxy for ethnicity, as the sketch after this list illustrates
- Self-reinforcing bias: A flawed system creates new skewed data that amplifies the problem
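To make the proxy-variable point concrete, here is a minimal sketch in Python. All data, feature names, and numbers are invented for illustration and are not drawn from any real lending system. The model is trained without ever seeing the protected group label, yet it reproduces the historical disparity through the correlated zip code:

```python
# Minimal sketch: removing the protected attribute is not enough when a
# correlated proxy (here, a zip code) remains in the training data.
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute, never shown to the model
zip_code = np.where(rng.random(n) < 0.9,   # proxy feature: 90% correlated with group
                    group, 1 - group)
income = rng.normal(50_000, 10_000, n)     # a legitimate feature, identical across groups

# Historical decisions were biased: group 1 was approved less often at the same income.
p_approve = np.clip(1 / (1 + np.exp(-(income - 50_000) / 10_000)) - 0.3 * group, 0, 1)
approved = (rng.random(n) < p_approve).astype(int)

# Train WITHOUT the group column: only (standardized) income and the proxy.
X = np.column_stack([(income - 50_000) / 10_000, zip_code])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# The bias survives via the proxy: predicted approval rates differ by group.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Dropping the protected column is therefore not a fix on its own; the disparity has to be measured directly, which is what the audit sketched under “What’s the solution?” below does.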
What’s the solution?
- Review the training data and test the AI on different groups (a minimal group-wise audit is sketched after this list)
- Demand transparency from companies and government agencies
- Ensure human oversight: AI should not make decisions alone in sensitive matters
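What “testing on different groups” can look like in practice: the function below is a bare-bones audit that compares selection rates and false-positive rates per group. The function name and data are invented for this example; real audits typically use dedicated fairness toolkits and many more metrics:

```python
# Minimal sketch of a group-wise audit. All names and data are
# illustrative, not part of any real fairness library.
import numpy as np

def group_audit(y_true, y_pred, group):
    """Print selection rate and false-positive rate for each group."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    for g in np.unique(group):
        mask = group == g
        selection_rate = y_pred[mask].mean()   # how often this group is selected
        negatives = mask & (y_true == 0)       # members who should NOT be selected
        fpr = y_pred[negatives].mean() if negatives.any() else float("nan")
        print(f"group {g}: selection rate {selection_rate:.2f}, "
              f"false-positive rate {fpr:.2f}")

# Toy data: the model selects group 0 three times as often as group 1.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
group_audit(y_true, y_pred, group)
```

Large gaps between groups in either metric are exactly the kind of disparity described in the real-world examples above.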
If a human makes an incorrect decision, we can hold that person accountable. But when AI makes a mistake — who is responsible?
Issue 3: Accountability – Who bears responsibility when AI goes wrong?
Self-driving car
A self-driving car kills a pedestrian. Who is responsible?
- The manufacturer of the car?
- The company that built the AI?
- The programmer who wrote the code?
- The person sitting in the car (but not driving)?
Medical AI
An AI recommends the wrong treatment and the patient dies. Who is responsible: the AI company, the hospital that used the AI, or the doctor who trusted the assessment?
The problem
AI systems are so complex that it’s difficult to point to ONE cause of a mistake. No one consciously made a bad decision; the system “learned” the behavior from data.
Legislation on accountability for AI systems is under way, but the question is not yet settled.
AI and automation are fundamentally changing the labor market — and the question is not whether it happens, but how we manage the transition.
Issue 4: Impact on the labor market
Jobs being affected
- Routine tasks are being automated: Factory work, warehouse management, data entry
- Some white-collar and service roles: Basic customer service, booking, cashier work
- Even ‘knowledge work’ is affected: Simple legal documents, accounting, data analysis
Estimates vary, but many forecasts suggest that 10–30% of today’s jobs will be significantly affected by AI within 10–20 years.
But new jobs are also being created
- AI training and data management
- AI ethics and oversight
- People working alongside AI
- Creative professions (AI cannot yet replace real creativity and emotional intelligence)
Who wins and who loses?
- Highly educated people who can work with AI will likely become more productive and better paid
- Less educated people with routine jobs risk unemployment
This risks increasing social inequality.
What can society do?
- Retraining and lifelong learning
- Social safety nets for those who lose their jobs
- Discussion of basic income — if AI dramatically increases productivity, how is the wealth distributed?
AI is being developed by a handful of actors, creating a concentration of power with far-reaching consequences for economics, politics, and global security.
Issue 5: Power and control – Who owns AI?
Who controls AI development?
Companies: Google, Microsoft, Meta, Amazon, and Apple in the US; Baidu, Alibaba, and Tencent in China.
This gives them enormous power:
- They decide which AI systems are built
- They own the data that trains AI
- They set standards for how AI is used
The risks
- Monopoly on knowledge and technology
- AI is developed for companies’ profit, not society’s benefit
- Countries without their own AI capacity become dependent on others
Geopolitics
AI is a strategic resource, like oil or weapons. The US and China are competing to dominate AI development, which affects everything from economics to military power.
The question: Should AI be controlled by private companies or by states, or should it be a global common resource?
Military AI and autonomous weapons systems raise profound ethical questions that the international community has not yet answered.
Issue 6: AI weapons and autonomous systems
Military AI today
- Drones that independently identify and attack targets
- AI that analyzes intelligence and suggests strategies
- Cyberwarfare with AI
The question: Should AI be allowed to make decisions about life and death?
Many researchers and activists are pushing for a ban on lethal autonomous weapons: weapons that can kill without human control.
The problems
- An AI cannot make moral judgments
- Risk of escalation — AI reacts to AI in milliseconds, without human reflection
- The accountability question: Who bears responsibility for AI’s decisions in war?
Generative AI can create false information that is almost impossible to distinguish from genuine content, with serious consequences for democracy and trust.
Issue 7: Disinformation and deepfakes
What are deepfakes?
AI-generated videos, images, or audio in which people appear to say or do things they never did, often nearly impossible to distinguish from genuine material.
Consequences
- Political influence: Deepfake videos of politicians “saying” controversial things
- Fake news: AI writes credible but fabricated news articles
- False evidence: AI-generated images spread as “proof” of events that never happened
- Personal harm: Non-consensual deepfake pornography
If everything can be fake — how do we know what to trust?
Defense
- AI that detects deepfakes (but it’s an arms race)
- Media literacy and source verification
- Legislation against distributing deepfakes
As an individual and citizen, you have more power than you might think — both in how you use AI and in how you participate in the societal debate that shapes its future.
What can YOU do?
Be critical
- Question AI decisions, especially in important matters
- Ask how the AI was trained and what data it uses
- Demand transparency
Protect your privacy
- Be aware of what data you share
- Use privacy settings
- Support legislation that protects you
Keep learning
- Understand how AI works (this course is a start!)
- Stay updated on developments and risks
Join the debate
- AI affects society, and your voice counts
- Support ethical AI development
- Demand that politicians and companies take responsibility
Key takeaways
Here we gather the most important insights from this section before you move on to the quiz.
- Privacy: AI systems collect enormous amounts of data about you — often without you knowing or having approved how it’s used
- Bias: AI can automate and amplify society’s injustices in recruitment, credit assessment, and criminal justice
- Accountability: It is legally and ethically unclear who bears responsibility when an AI system makes a wrong decision
- Labor market: AI risks widening social inequality if the transition is not managed responsibly
- Power: AI development is controlled by a handful of companies and countries, with geopolitical consequences
- Deepfakes and disinformation pose serious threats to democracy and trust in information
Test your knowledge