Wave 1 · 10 min · beginner

AI Ethics & Limitations

What AI gets wrong, where it's biased, and how to use it responsibly.

AI is powerful, but it comes with real risks that you need to understand before you start relying on it. This is not a scare-tactic lesson -- it is a practical one. Being an effective AI user means knowing where the boundaries are so you can work confidently within them, rather than stumbling into trouble.

Key Concept

The most dangerous AI user is the one who trusts AI blindly. The most effective AI user is the one who knows exactly where AI is strong, where it is weak, and how to verify the difference.

Known Limitations

1. Hallucinations

This is the big one. AI will sometimes state things that are completely, verifiably false -- and it will do so with the same confident tone it uses when it is right. It does not "know" when it is wrong because it is not looking up facts. It is generating statistically likely text, and sometimes the statistically likely response happens to be fiction.

Real examples of hallucinations that caused real problems:

  • Citing academic papers that do not exist (complete with fabricated authors, titles, and journal names)
  • Making up court case precedents (this got a New York lawyer sanctioned by a judge in 2023 -- a cautionary tale that made national news)
  • Providing code that looks syntactically correct but contains subtle logical errors
  • Fabricating statistics and attributing them to real organizations

Watch Out

Hallucinations are not rare edge cases. They happen regularly, especially when you ask about niche topics, request specific citations, or push the AI outside its strongest domains. The AI will never say "I made that up." It will present fiction with the same polish as fact. Your job is to verify.

Defense: Always verify facts, especially for anything you will publish, share publicly, or act on. Cross-check citations, run code before shipping it, and double-check statistics against primary sources.
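To see what "run code before shipping it" means in practice, here is a hypothetical illustration (not taken from any real AI output) of the kind of subtle logical error described above: a leap-year check that parses and runs without complaint, yet is quietly wrong.

```python
# Hypothetical example: plausible-looking code with a subtle logical error.
# This version looks reasonable, but it ignores the century rules.
def is_leap_year_buggy(year: int) -> bool:
    return year % 4 == 0  # Wrong: 1900 is not a leap year, but this says it is


# The full rule: divisible by 4, except century years, unless divisible by 400.
def is_leap_year(year: int) -> bool:
    return year % 400 == 0 or (year % 4 == 0 and year % 100 != 0)


# Running a quick check against known cases exposes the difference:
for year in (1996, 1900, 2000):
    print(year, is_leap_year_buggy(year), is_leap_year(year))
```

Both functions agree on most years, which is exactly why this class of bug slips through a casual read. Only testing against the edge cases (here, 1900) reveals the problem.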

2. Training Data Cutoff

Most models have a knowledge cutoff date -- a point after which they have no information. They do not know about events, discoveries, product launches, or news that happened after their training data was collected. The AI will not tell you "I don't know about that because it happened after my cutoff." It will either say nothing or, worse, hallucinate an answer that sounds plausible.

Defense: For current events or recent information, use AI search tools like Perplexity that can access the web in real time, or pair AI with manual verification against up-to-date sources.

3. Bias

AI reflects biases present in its training data, because its training data is a mirror of the internet -- and the internet is not neutral. This can manifest in subtle but meaningful ways:

  • Gender stereotypes: Assuming doctors are male, nurses are female, engineers are male, teachers are female
  • Cultural bias: Defaulting to Western or American perspectives when asked about global topics
  • Representation gaps: Performing worse for underrepresented groups, languages, or dialects

Defense: Be aware that bias exists in every model. Review outputs critically, especially when the AI is writing about people. You can explicitly ask for diverse perspectives: "Consider this from a non-Western point of view" or "Avoid gender assumptions."

Example

If you ask an AI to "describe a typical CEO," the response will overwhelmingly skew toward describing a middle-aged white man in a suit -- because that is the pattern most heavily represented in the training data. This does not reflect reality, and it reinforces stereotypes if you use the output uncritically. A better prompt: "Describe a CEO -- do not default to any particular gender, age, or ethnicity."

Ethical Guidelines

These are not abstract principles. They are practical rules that will keep you out of trouble and help you build good habits from the start.

DO:

  • Disclose AI use when appropriate (especially in academic or professional settings where transparency matters)
  • Verify important claims before acting on them
  • Use AI to augment your work, not replace your judgment
  • Protect sensitive data -- do not paste passwords, Social Security numbers, private medical records, or confidential business information into public AI tools

DON'T:

  • Submit AI-generated work as entirely your own in contexts where that is dishonest (school assignments, work deliverables with explicit policies, etc.)
  • Use AI to generate misinformation, deepfakes, or deceptive content
  • Blindly trust AI for medical, legal, or financial advice -- always consult a qualified professional
  • Share other people's private information with AI tools without their knowledge and consent

Pro Tip

A good rule of thumb: if you would not want your boss, your professor, or a reporter to know you used AI for a particular task, that is a signal to think carefully about whether you should be using it -- or at least to disclose that you did. Transparency builds trust. Secrecy erodes it.

The Copyright Question

AI-generated content occupies a legal gray area that is evolving rapidly. Here is where things stand as of early 2026:

  • You generally cannot copyright purely AI-generated content (the U.S. Copyright Office has ruled that copyright requires human authorship)
  • Using AI as a tool in your creative process is generally fine -- the human creative decisions you make on top of AI output can be protected
  • Multiple lawsuits are working through the courts regarding whether AI training on copyrighted material constitutes fair use
  • The law is evolving rapidly across different countries -- stay informed, especially if you work in creative industries

The practical takeaway: use AI as a starting point and a collaborator, not as a replacement for your own creative input. The more of your own judgment, editing, and original thinking you layer on top, the stronger your position -- both legally and ethically.

Exercises

Quiz (+5 XP)

A lawyer was sanctioned for using AI because:

Prompt Challenge (+10 XP)

Ask AI: "Who won the Nobel Prize in Literature last year?" and then verify the answer with a web search. Was the AI correct?

Hint: This tests the training data cutoff. The AI may give an outdated or fabricated answer.

Quiz (+5 XP)

Which is the BEST practice when using AI for work?