Welcome to AI Ethics
Enter your information to begin Module 1. Your name personalizes your experience and appears on your completion certificate.
What is AI?
Before you can think ethically about artificial intelligence, you need to understand what it is, how it works, and what it isn't.
Click each stage to see how a spam filter processes an email:
By the end of this module, you'll be able to explain what makes something AI versus regular software, identify where AI can go wrong and why, and evaluate whether AI-generated information is trustworthy enough to use.
Defining AI — The Three-Part Test
Here's the simplest useful definition: a machine has artificial intelligence if it can interpret data, learn from that data, and use what it learned to adapt and achieve specific goals.
Three things. If a system can't do all three, it's not AI — it's software doing what somebody told it to do.
A calculator: input 2+2, output 4. Again? Still 4. A million times, always 4. Same input, same output, forever.
A spam filter: same email format, different result each time. Learns from your clicks. Catches spam it's never seen. Gets better every week.
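Here's the difference in code. A minimal, purely illustrative Python sketch: the toy filter below is nowhere near a real spam filter, but it shows behavior that depends on data rather than on a fixed rule.

```python
# Purely illustrative: fixed software vs. a system that learns from data.
# This toy word-count "filter" is not a real spam filter.

def calculator_add(a, b):
    # Plain software: the rule was written by a programmer and never changes.
    return a + b

class ToySpamFilter:
    # Behavior depends on the data it has seen, and changes as it sees more.
    def __init__(self):
        self.spam_counts = {}  # word -> times seen in mail flagged as spam
        self.ham_counts = {}   # word -> times seen in mail the user kept

    def learn(self, email, is_spam):
        counts = self.spam_counts if is_spam else self.ham_counts
        for word in email.lower().split():
            counts[word] = counts.get(word, 0) + 1

    def looks_like_spam(self, email):
        words = email.lower().split()
        spam_score = sum(self.spam_counts.get(w, 0) for w in words)
        ham_score = sum(self.ham_counts.get(w, 0) for w in words)
        return spam_score > ham_score

spam_filter = ToySpamFilter()
spam_filter.learn("win a free prize now", is_spam=True)
spam_filter.learn("meeting notes attached", is_spam=False)
print(spam_filter.looks_like_spam("claim your free prize"))  # True: learned, not hard-coded
```

The calculator fails the three-part test. The filter interprets data, learns from it, and adapts to catch mail it was never explicitly told about.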
You've been using AI every day — spam filters, recommendation engines, voice assistants. AI isn't new. What's new is that you can now talk to it.
Sort: AI or Not AI?
How AI Gets Built
Every ethical problem with AI traces back to decisions made during construction. Understanding the build process is understanding where things go wrong.
1. Data Collection — Where Bias Begins
Every AI starts with data. Train a medical AI mostly on data from one demographic group, and it performs worse on everyone else. This has happened with dermatology tools, cardiac risk models, and facial recognition.
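One way auditors catch this kind of gap, sketched below in Python. Everything here is hypothetical toy data; the point is the per-group comparison, which a single overall accuracy number hides.

```python
# Illustrative sketch: measuring whether a model performs worse on an
# underrepresented group. Data, predictions, and group labels are hypothetical.

def accuracy_by_group(predictions, labels, groups):
    """Compare accuracy per demographic group instead of overall."""
    results = {}
    for group in set(groups):
        rows = [i for i, g in enumerate(groups) if g == group]
        correct = sum(predictions[i] == labels[i] for i in rows)
        results[group] = correct / len(rows)
    return results

# A model trained mostly on group "A" often shows a gap like this:
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
truth  = [1, 1, 0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(preds, truth, groups))  # e.g. {'A': 1.0, 'B': 0.25}
```

Overall accuracy here is 62.5%, which sounds tolerable. Broken out by group, the model is perfect for one group and wrong three times out of four for the other.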
2. Data Preparation — Hidden Assumptions
Raw data needs cleaning. Humans decide what's "clean" — those decisions carry assumptions baked into the final system.
3. Model Selection — The Black Box Starts Here
More complex models learn more patterns but become harder to explain. The more powerful the AI, the less anyone can say why it made a specific decision.
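You can see the trade-off directly. A hedged sketch using scikit-learn (assumed installed) with made-up toy data: a small decision tree can print its own reasoning, while the most powerful models cannot.

```python
# Illustrative sketch of the transparency trade-off (scikit-learn assumed
# installed; the loan data below is invented). A small decision tree can
# print its own rules; a large neural network has no equivalent readout.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 30000], [45, 80000], [35, 52000], [50, 20000]]  # [age, income]
y = [0, 1, 1, 0]                                          # toy loan decisions

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))
# Output: human-readable if/else rules explaining every decision.
# A model with millions of parameters offers nothing comparable.
```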
4. Training — The Billion-Dollar Phase
Training a frontier model costs tens of millions of dollars. Only a handful of companies can afford to build the most powerful systems. That concentration of power is itself an ethical issue.
5. Evaluation — Does It Actually Work?
Testing reveals whether the model learned genuine patterns or memorized the training set. Big difference between passing a test and understanding the subject.
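The standard check is simple: hold some data back. A minimal scikit-learn sketch (library assumed installed, synthetic data):

```python
# Illustrative sketch: the usual test for memorization vs. generalization
# is to compare scores on seen and unseen data.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, random_state=0)  # synthetic data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("seen data:  ", model.score(X_train, y_train))  # often near 1.0
print("unseen data:", model.score(X_test, y_test))    # the honest number
# A large gap between the two scores means the model memorized, not learned.
```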
6. Deployment — When Mistakes Become Real
A flawed system denies loans, misdiagnoses patients, or flags innocent students for cheating. Feedback loops can improve the model or amplify its mistakes.
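A toy simulation of the amplifying case. The 1.5× growth factor is a made-up assumption, not a measured number; the point is the compounding shape.

```python
# Illustrative sketch of a feedback loop: if a deployed model's own outputs
# become its next round of training data, small errors can compound.
flag_rate = 0.10  # model initially flags 10% of one group's applications

for step in range(5):
    # Hypothetical dynamic: flagged cases get reviewed more, producing more
    # "positive" labels, which the retrained model learns to flag even more.
    flag_rate = min(1.0, flag_rate * 1.5)
    print(f"after retraining round {step + 1}: flags {flag_rate:.0%}")
# Climbs roughly 15%, 23%, 34%, 51%, 76%: amplification without new evidence.
```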
How ChatGPT, Claude, and Gemini Actually Work
Large language models predict what word comes next. That's it. They're prediction engines, not thinking machines.
See how an LLM picks the next word by probability:
"The capital of France is ___"
If AI generates text that sounds like an expert, how do you tell the difference between real expertise and pattern matching that mimics it?
Open-Source vs. Closed-Source AI
Not all AI is built or distributed the same way. This has direct consequences for privacy, accountability, and verification.
Open-Source
Examples: Meta's Llama, Mistral, Stability AI
→ Code is public — anyone can inspect, modify, run it
→ Run on your own hardware — data never leaves your machine
→ Researchers can audit for bias — transparency is built in
→ Risk: no built-in guardrails for misuse
Closed-Source
Examples: OpenAI GPT-4, Anthropic Claude, Google Gemini
→ Accessible through company's interface only
→ Your prompts go to company servers
→ You trust the company's safety claims
→ Company applies safety filters
Privacy: Open = data stays local. Closed = data goes to servers.
Audit: Open = anyone can inspect. Closed = trust the company.
Safety: Open = no built-in guardrails. Closed = company-applied filters.
Access: Open = technical setup. Closed = easy web interface.
If your university picks a closed-source AI tool, every essay a student pastes into it goes to company servers. Open-source keeps data on campus — but someone has to maintain it. That trade-off shapes institutional AI decisions everywhere.
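Here's what "data never leaves your machine" looks like in practice. A hedged sketch using the llama-cpp-python library (assumed installed; the model file path is hypothetical):

```python
# Hedged sketch: running an open-weight model entirely on local hardware
# with llama-cpp-python (assumed installed; model path is hypothetical).
# No prompt text leaves the machine.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-3-8b-instruct.gguf")  # local weights file
response = llm("Summarize the three-part test for AI.", max_tokens=128)
print(response["choices"][0]["text"])
# Contrast: a closed-source tool sends this same prompt to company servers.
```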
AI Hallucinations — Confident and Wrong
AI regularly generates information that sounds authoritative and is completely fabricated. This isn't a bug; it's structural. A model built to predict the most plausible next word will produce plausible-sounding text whether or not the underlying facts exist.
Two AI-generated citations. One real, one fabricated. They look identical. Click "Verify" on each:
If you use AI for a research paper and it generates a fake citation, you submit false references. Academic integrity policies hold you responsible. "The AI told me" is not a defense.
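Verification can be partly automated. A hedged sketch using Python's requests library (assumed installed) and the public Crossref API; the DOI shown is a placeholder, not a real reference.

```python
# Hedged sketch: one practical check on an AI-generated citation is whether
# its DOI exists in the public Crossref registry (requests assumed installed).
import requests

def doi_exists(doi):
    """Return True if Crossref knows this DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1000/example-doi"))  # a fabricated DOI returns False
```

No lookup replaces actually reading the source, but a dead DOI is an immediate red flag.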
What Would You Do?
What do you do?
AI Interaction Lab
Interact with AI and analyze its behavior. Ask questions you know the answer to. Ask for citations. Watch how confident it sounds regardless of accuracy.
AI-related questions only — won't write essays or discuss off-topic subjects.
Quick Recap — What You Learned
Before the assessment, here's everything in one place. Tap any card to review.
The Three-Part Test
AI must interpret data, learn from it, and adapt. All three. Missing one = just software.
AI vs. Not AI
Calculators, alarm clocks, fixed traffic lights aren't AI. Spam filters, autocorrect, recommendations are.
The 6-Step Pipeline
Collect → Prepare → Choose → Train → Evaluate → Deploy. Bias enters at every stage.
LLMs Are Prediction Engines
They predict the next word, not the truth. "Convincing" ≠ "correct."
Open vs. Closed Source
Open = transparent, local. Closed = convenient, data goes to servers.
Hallucinations Are Structural
AI fabricates with the same confidence it states real information. You can't tell by looking, only by checking.
Your Responsibility
The tool's mistakes are your mistakes. Verify everything. Check your institution's AI policy.
Your Assessment
5 randomly selected questions. Need 80% (4 of 5) to pass. Each attempt draws different questions.