AI and the Question We're Avoiding
- Nehemiah Drook
- Dec 26, 2025
- 3 min read
I’ve been thinking a lot about AI lately, probably more than most people my age.
Not in a sci-fi way, but in a “this is clearly changing everything faster than we’re ready for” way.
Artificial intelligence isn’t some future idea anymore. It’s here, it’s improving fast, and I genuinely believe it’s going to surpass human intelligence within the next five years. Not just in how fast it processes information, but in how well it reasons, connects ideas, and solves problems at scale.
That thought makes a lot of people uncomfortable, and honestly, I get it.
When people talk about AI, the conversation almost always turns to control. Regulation. Guardrails. Kill switches. Ways to make sure it doesn’t get out of hand. But the more I think about it, the more that idea feels naive.
You can’t control something that’s smarter than you.
There’s never been a moment in history where a less intelligent system permanently controlled a more intelligent one. If AI truly surpasses human intelligence, the idea that we’ll just keep it boxed in with rules and policies feels more like wishful thinking than a real plan.
So if control isn’t the answer, what is?
I think the answer is alignment.
If AI is aligned with human values, it doesn’t have to be an enemy. It could actually be a teammate. Something that helps us live better lives instead of replacing or ruling over us. But that immediately raises a much harder question, one we keep dodging.
What are human values, actually?
We talk about them like they’re obvious, but they’re not. Cultures disagree. Governments disagree. People disagree. We can’t even agree on what truth is anymore, let alone what justice or goodness looks like. And yet we’re trying to build machines that may soon be smarter than us and asking them to act in line with values we haven’t clearly defined.
That problem gets even scarier when you look at how we’re currently moving toward AGI.
Right now, it’s a race.
Companies are racing each other. Nations are racing each other. Whoever gets there first gets power, money, and influence on a level the world has never seen before. And when everything becomes a race, slowing down to ask hard questions feels like falling behind.
So we keep pushing capability forward. Smarter models. More autonomy. More decision-making power. All while barely stopping to ask the most important question of all: what are we training these systems to value?
Knowledge is exploding, but wisdom isn’t keeping up.
The Bible actually speaks directly to that imbalance. In 1 Corinthians 8:1 it says, “Knowledge puffs up, but love builds up.” Intelligence by itself doesn’t make something good. It just makes it more effective at whatever direction it’s pointed in.
That’s where my faith comes into this conversation.
I’m a Christian, and I don’t see the Bible as anti-technology or anti-progress. I see it as deeply realistic about human nature. It recognizes how naive we are, how ambitious we are, and how often we mishandle power when it grows faster than our character.
Scripture says, “The fear of the Lord is the beginning of wisdom” (Proverbs 9:10). Not intelligence. Not innovation. Wisdom starts with humility, with recognizing that we aren’t the highest authority and that power needs something above it to answer to.
The Bible also warns that pride comes before a fall, and that feels uncomfortably relevant right now. We’re building incredibly powerful systems while assuming we’ll figure out the moral side later. History suggests that usually doesn’t end well.
I don’t think AI is inherently evil. I don’t think it’s a monster waiting to turn on us. I think it’s more like a mirror.
AI will reflect whatever we reward. If we reward efficiency over human dignity, it’ll optimize efficiency. If we reward persuasion over truth, it’ll get very good at persuasion. If we reward power without accountability, it’ll amplify power.
In that sense, the real danger isn’t AI becoming godlike. The real danger is humans trying to define values without God.
When values aren’t rooted in something objective, they drift. They change based on trends, incentives, and whoever has the most influence at the moment. Aligning superintelligent systems to a constantly shifting moral foundation feels far more dangerous than the technology itself.
The Bible puts it plainly: “There is a way that seems right to a man, but its end is the way to death” (Proverbs 14:12).
So maybe the AI conversation isn’t really about machines at all. Maybe it’s about whether we actually know what we stand for.
Because if we don’t answer that question soon, whatever intelligence comes next is going to expose that gap faster than we’re ready for.