Exhibit 023 of 43 han (한)

The Algorithm

Filed 2026-02-17 Re: ai, bias, system

The algorithm denied her claim. Not a person. The algorithm.

The insurance company uses a model trained on historical claims data. The historical data reflects decades of decisions about who deserves care and who doesn’t. The biases aren’t bugs — they’re the training data.

Nobody overrode the algorithm. Not because they couldn’t. Because overriding it requires a person to take responsibility, and the whole point of the algorithm is that nobody has to. “The system flagged it.” “The model determined it wasn’t medically necessary.” Agentless sentences. No human subject. No one decided. It just happened.

She appealed. The appeal is reviewed by a person who looks at the algorithm’s output and almost always agrees. Not because the person is lazy. Because disagreeing means documenting why, and the documentation goes to someone who wants to know why the human disagreed with the model the company paid $30 million to build. The incentive is airtight: agree with the machine and go home on time, or disagree and write a report that questions the investment.

The algorithm is in everything now. The loan application. The job screening. The bail decision. The insurance claim. Each one processed by a model trained on data that reflects every bias the institution has ever practiced, deployed at a speed that makes review impossible, and defended with language that sounds like objectivity.

“The algorithm is unbiased.” The algorithm processes numbers. Numbers don’t have opinions. This is like saying a gun doesn’t have intent. Technically correct. Operationally meaningless. The algorithm encodes the intent of the data it was trained on.

She can’t argue with the algorithm. There’s no one to argue with. It’s a decision without a decider, a judgment without a judge, a no that comes from everywhere and nowhere.

Hancock.