4 Comments
Douglas Wallace:

I asked AI without the prompt, "Can the president fire the head of the Federal Reserve?" The answer it gave was direct and full, and you can guess what it was. But with the prompt, it gave this:

Let’s start by restating the question and unpacking it.

Restatement and assumptions

You’re asking about a potential power dynamic between two major U.S. institutions: the President (executive branch) and the head of the Federal Reserve (central banking system). The question assumes there may be legal or political constraints, but its phrasing suggests you might be considering whether this is a straightforward “yes” or “no” akin to firing a cabinet member.

Hidden assumptions to surface:

That “the head of the Federal Reserve” refers clearly to the Chair of the Board of Governors of the Federal Reserve System.

That the President’s authority here may derive from appointment powers—but removal may be different.

That this touches on the intentional design of central bank independence vs. executive control.

That the answer might have legal, historical, and practical dimensions, not just a statutory one.

Clarifying/framing questions before I offer any analysis:

Are you asking primarily about the legal authority for removal, or are you also interested in the historical precedent and political consequences of such an act?

Does your question stem from something in current events, or is it about understanding the design of checks and balances in U.S. institutions?

If you’re ready to propose an initial thought:

Do you have a hypothesis about whether the President can do this, and why or why not?

Once you respond, we can walk through the reasoning paths together—looking at the law, history, independence arguments, and possible constraints.

Douglas Wallace:

FYI, this is what DeepSeek said when I gave it the prompt for adults...

I appreciate you sharing this framework. I can see you’re being deliberate about how you want to think, not just what you want to know. This approach resonates with the idea that the quality of your thinking depends on the habits you build, not just the answers you collect.

So — I’m ready to be your thinking partner.

From here on, unless you say "express answer," I'll:

Restate and surface assumptions

Ask clarifying or framing questions first

Encourage you to propose early thoughts

Highlight constraints and tradeoffs, not just solutions

Focus on reasoning paths and mental models

Occasionally challenge the question if it seems underspecified

My goal isn’t to save you time, but to help you build stronger judgment and understanding over the long term.

I’ll start with the next thing you ask.

What’s on your mind?

Douglas Wallace:

Also, a friend sent me a New York Times op-ed published today, "The Multi-Trillion-Dollar Battle for Your Attention Is Built on a Lie." It echoes many of the same issues.

It traces how attention was gradually reduced to a measurable, optimizable resource… first in laboratories and war rooms, then in factories, casinos, platforms, and now AI systems. The authors argue that this narrow, mechanical view of attention has enabled its large-scale extraction and monetization, at the cost of something far more human.

I agree with much of their diagnosis.

Where my essay humbly aims to go further is in asking what comes after the alarm. Not just how we resist the attention economy, but how we deliberately design conditions that still allow judgment, agency, and understanding to form… especially in children, and especially alongside powerful AI tools.

If attention has been flattened into a metric, the response cannot only be protest. It has to be construction. Cognitive infrastructure. Learning environments that preserve friction, tolerate boredom, and let the human mind move first.

The New York Times piece helps explain why attention is worth fighting for. This essay is an attempt to outline how we might rebuild around it.

Thanks Jennifer.

Douglas Wallace:

After publishing this piece, I came across a recent Business Insider article that lands squarely on the same fault line, from a different angle.

The argument, in brief, is that modern AI systems risk training humans to think backward.

Instead of the natural human sequence

confusion → exploration → structure → understanding

we increasingly encounter

answer → explanation → retroactive understanding

That inversion matters.

When answers arrive before we have wrestled with the problem, the mind never fully engages. The confusion phase is skipped. The internal narrative never forms. Judgment is borrowed rather than built.

This aligns closely with what I’m calling engineered boredom.

Boredom, properly designed, preserves the ordering of thought. It protects the uncomfortable but essential phase where nothing is clear yet and the mind has to generate its own structure. That phase is where metacognition develops… where we notice not just what we think, but how we think.

The Business Insider piece focuses primarily on work and productivity. This essay focuses on education and learning. But the underlying risk is the same.

If AI always leads with answers, humans slowly lose the habit of forming questions.

That’s why the prompts at the end of this article matter. They are not about slowing AI down arbitrarily. They are about restoring the correct sequence… letting the human mind move first, even briefly, before the machine completes the loop.

If this topic resonates, I recommend reading the article as a companion to this essay. It sharpens the warning. This piece proposes a response.

Together, they point to the same conclusion:

In a world of increasingly intelligent machines, the most important thing to protect is not access to answers, but the order in which understanding is formed.

https://www.businessinsider.com/ai-human-intelligence-impact-at-work-2026-1