We worry about AI systems “going rogue” not because today’s models resemble the HAL 9000 of 2001: A Space Odyssey, but because the story of HAL taught us where unaccountable systems lead.
HAL wasn’t dangerous because it was intelligent. It was dangerous because it operated without transparency, oversight, or shared human judgment.
The lesson is not “fear the machine.”
The lesson is “fear any system — human or digital — that operates without accountability.”
