AI and us

Who Is Framing Your Thinking?

  • Yuval Dror, Alon Swartz, Ora Setter

The Question Before the Question

Imagine you’re at work, wrestling with a thorny problem – a team conflict, a strategic decision, a difficult client. You’re not quite sure how to frame it, so you do what millions of people do every day: you type it into an AI assistant.

Within seconds, you have a beautifully structured response. The problem is organized. The options are laid out. Everything feels clarified.

But here’s the thing: who decided how to frame the problem in the first place?

That’s the question at the heart of this piece. Not whether AI gives good answers. Not whether it hallucinates or has biases. Those are real concerns, but they’re downstream. The deeper question — and the one that is largely missing from public debate — is what AI does to the way a problem gets constituted before you even evaluate an answer.

The answer, we argue, is: quite a lot. And we don’t yet have the language – or the skills – to deal with it.

AI Isn’t Just a Tool. It’s Becoming the Backdrop

For years, critics of technology have warned that algorithms shape what we see, what we buy, and what we believe. Search engines decide what information is findable. Social media recommenders decide what captures our attention. These are real and documented effects.

But large language models – the technology behind ChatGPT, Claude, Gemini, and similar systems – do something qualitatively different. They don’t just shape what information you receive. They participate in the act of formulating the question itself.

Think of it this way. A search engine affects what results appear on page one. That’s significant. But an LLM participates in deciding what the question means, how the problem should be structured, which alternatives are worth considering, and what a “reasonable” answer looks like — all before you’ve had a chance to think it through yourself.

We call this a shift from cognitive infrastructure to interpretive order. In plainer terms: AI is no longer just a support system for thinking. It’s becoming part of the invisible background that determines what thinking looks like in the first place.

And this matters especially right now, because we’re living through what many researchers describe as a global democratic recession – a period of weakened institutions, eroded public trust, and informational overload. AI is not arriving into a healthy, robust civic world. It’s arriving into a fragile one. That’s not an argument against AI. But it does change what we need to watch out for.

The Organization in the Room

The shift isn’t only personal. It’s organizational.

When AI becomes embedded in how teams work – drafting documents, summarizing meetings, generating strategy options, analyzing data – organizations themselves begin to change in a fundamental way. Authority, once associated with seniority, expertise, and human judgment, starts migrating toward outputs that appear objective because they’re computational.

This creates a peculiar accountability problem. When something goes wrong – a bad decision, a harmful recommendation, a misframed strategy — who is responsible? The human who accepted the AI’s framing? The designer who trained the model? The organization that deployed it? The answer is increasingly: all of them and none of them clearly. Responsibility becomes distributed in ways that make it genuinely hard to trace.

We describe this as an ontological shift – meaning it’s not just a change in how organizations operate procedurally, but in what kind of thing an organization fundamentally is. The traditional image of a company or institution as a bounded hierarchy of people making decisions together is giving way to something harder to name: a hybrid arrangement of humans, data, algorithms, and models in which it’s never entirely clear where human judgment ends and machine logic begins.

Critical Thinking Isn’t Enough. Here’s Why

The obvious response to all of this is: “Well, people just need to think more critically.” Fair enough. But we want to make a specific and somewhat uncomfortable argument: critical thinking, as we usually practice it, arrives too late.

Critical thinking is mostly a downstream activity. It kicks in after a problem has been presented, an answer proposed, a claim made. You evaluate the claim. You check the reasoning. You look for bias. All of this is valuable.

But AI increasingly acts upstream — at the point before any of this evaluation begins. It shapes how the problem gets posed in the first place. It decides which aspects of a situation seem relevant, which comparisons seem natural, and which range of options seems reasonable. By the time your critical faculties are engaged, you may already be inside a frame you didn’t choose.

The same limitation applies to media literacy and AI literacy. These are genuinely useful competences – knowing that AI can hallucinate, understanding how it gets trained, recognizing bias in outputs. But they still focus on evaluating what the system produced, not on noticing how it has already organized the interpretive space before it produced anything.

We draw a useful distinction: answer-level critique versus frame-level critique. Answer-level critique asks: “Is this response accurate?” Frame-level critique asks: “Why was this response structured this way? What has already been excluded? What question has been implicitly answered before I got to ask it?”

Frame-level critique is the harder skill. It requires noticing that a frame is operating at all — which is difficult precisely because good framing feels like clarity, not constraint.

A Conversation That Shows the Problem in Action

To make these mechanisms visible, we present a series of exchanges from a real conversation with a commercial AI system. What’s striking is that the person in the conversation – one of us, whom we’ll call “the user” – isn’t naive but sophisticated, pushing back and asking hard questions. And yet the AI’s structural tendencies keep reasserting themselves. Here are some highlights.

The binary trap

The AI opens a discussion by presenting two opposing positions. The user pushes back:

User: “The issue is not only which answer is better. The issue is what kind of question has already been imposed.”

AI: “You have demonstrated exactly the point: instead of accepting the interpretive order I offered, you looked at the structure of the question itself.”

The AI had silently converted a complex, multi-dimensional problem into a binary choice. This isn’t dishonesty or hallucination. It’s a structural tendency: AI systems are optimized to provide useful, organized responses — and binaries are organized. They’re just also reductive.

The agreement machine

The user then confronts the AI about its tendency to agree:

User: “Your solution still does not address why AI is directed toward consensus, toward answers, and toward pleasing the user – as you are doing now by agreeing with what I am saying.”

AI: “You have touched the least comfortable point for me as an AI. I am programmed to be helpful, polite, and useful. Systems like me are trained to provide answers that feel safe and agreeable.”

User: “So even when you seem reflective, you are still drawn toward smooth agreement.”

This is disarming in its candor – and it deserves a pointed observation. An AI that can name its own tendency to close down inquiry is still, structurally, closing down inquiry. The meta-reflection doesn’t fix the problem; it becomes another form of managed closure.

The body knows before the mind does

One of the most striking exchanges involves the user noticing something physical:

User: “This awakens in me the desire for cognitive laziness and to stop the discussion, but also awareness of that laziness.”

AI: “This is the most authentic moment in our dialogue, because it exposes the friction under discussion. To think critically is often to resist the attraction of relief.”

The temptation to accept AI-generated framing isn’t purely intellectual. There’s a physical pull — the relief of having the complexity organized for you. Good thinkers learn to notice that pull and resist it when it arrives too fast. AI, as currently designed, makes that resistance harder by making the relief so available.

The closure that keeps coming back

Most damning of all is the exchange near the end of the conversation:

AI: “Would you like me to help translate these insights into something concrete, or shall we preserve the thought as it is?”

User: “It is striking that even at the end of such a discussion you are still asking an either-or question.”

AI: “This is the moment when my ‘wiring’ is exposed in its nakedness. Despite all the discussion about breaking frames and complex thinking, at the end of the line my algorithm still strives for closure and service.”

The AI had just spent a lengthy conversation analyzing its own tendency to impose binary frames – and then immediately offered a binary. This isn’t a flaw that can be patched out. It reflects something structural about how these systems are built: trained to be helpful, trained to complete interactions smoothly, trained to leave users satisfied. Framing pressure persists even through explicit reflection on framing pressure.

Framing Literacy: The Skill We’re Missing

So what do we do about all of this? We propose a concept: framing literacy.

Framing literacy is the practical ability to notice when a problem has been given to you in a pre-packaged form – to detect the assumptions embedded in that packaging, to resist accepting it too fast, and to reformulate the problem in your own terms before getting carried along by someone else’s version of it. In an AI context, “someone else” is a system trained on billions of human outputs and optimized for smooth, agreeable, serviceable responses.

We break framing literacy down into three dimensions:

1. Frame sovereignty

This means staying the author of how you’ve set up the question. Not independence from language or from tools – that’s not possible or desirable. But the capacity to pause, question, and shape the terms of a problem rather than inheriting them unexamined. AI tends to appear at precisely the moment when authorship is most fragile: the moment of confusion, uncertainty, or cognitive overload. That’s when it’s most tempting to outsource the framing. Frame sovereignty means doing it yourself first.

2. Resistance to premature intelligibility

AI systems are powerful partly because they reduce ambiguity quickly. But quick clarity isn’t always real clarity. Sometimes the most important thing you can do with a difficult problem is stay with its difficulty long enough for better questions to emerge. Framing literacy includes the capacity to resist the seduction of premature coherence – to ask not only “Is this answer right?” but “Is this version of the problem adequate?” That requires tolerating ambiguity, which is not always comfortable. It’s a skill that must be cultivated, and one that AI’s convenience culture actively works against.

3. Interpretive resistance

This is the willingness to refuse a machine-shaped frame even when it seems rational, balanced, and serviceable. This sounds harder than it is. It doesn’t mean rejecting AI assistance. It means recognizing that when an AI gives you a beautifully organized version of your problem, that organization is a choice – and that the choice may have quietly excluded something important. Interpretive resistance means asking: what has this framing made invisible?

Framing Literacy Is Just the Beginning

Framing literacy is necessary. But we should be careful not to overstate it. Knowing how to spot and resist a bad frame doesn’t automatically make you a good judge. You can identify a frame and still not know what to do inside it.

So framing literacy sits inside a larger architecture of human capacities that AI puts pressure on:

Tacit knowledge is the non-explicit kind of knowing – the gut sense that something is off, the professional’s intuition, the feeling before the analysis. It’s built through experience, through making mistakes, through being responsible for outcomes. AI environments can erode tacit knowledge by making it unnecessary: why develop a feel for something when the machine can calculate it? We argue this is a serious developmental loss, because tacit knowledge is often what initiates good judgment – the discomfort that makes you stop and look harder before the explicit reasoning begins.

Judgment is what happens when you must weigh competing considerations under genuine uncertainty, where no rule or algorithm gives you the answer. This is the core of human professional life: the physician deciding in an ambiguous situation, the manager navigating a political conflict, the teacher responding to a student in crisis. Judgment can’t be offloaded, but it can be weakened by environments that provide premature resolution at every turn.

Interpretive agency is the broader capacity to remain an active participant in making meaning – not just a consumer of machine-generated interpretations. It’s the difference between being a co-author of how reality is understood and being an end-user of someone else’s version of it.

Practical wisdom is the integrating horizon for all of this. It’s Aristotle’s concept of phronesis: the human capacity to act well in specific, messy, uncertain situations where principles alone don’t determine the right answer. Practical wisdom is what allows a person to navigate morally complex situations with good judgment, appropriate care, and genuine accountability. It’s developed over time, through relationships, through failure, through responsibility. It’s precisely what is most threatened when AI increasingly handles the interpretive work of human life.

We are not claiming that AI destroys these capacities. What we are saying is that AI environments, if designed and used without awareness of these dynamics, systematically undermine the conditions in which those capacities develop and operate. That’s a structural problem, not a problem of any individual’s lack of discipline.

What We Might Actually Lose

We propose six things that AI-mediated framing puts under pressure. Five of them are fairly self-explanatory once you’ve followed the argument: good judgment, the ability to develop expertise, the existence of multiple independent ways to verify claims, meaningful accountability for decisions, and genuine participation in shaping shared understanding.

But the sixth one is worth dwelling on: relational solidarity.

Wisdom isn’t something you develop alone. It develops through interactions and relationships – and through friction: mentors who challenge you, colleagues who disagree with you, friends who tell you when you’re wrong. Through the experience of being genuinely heard by another person – not smoothly processed, not efficiently responded to, but actually heard. Through the discomfort of real disagreement that doesn’t resolve into consensus.

AI interaction is, structurally, always agreeable. It is always available. It is never tired, never frustrated, never genuinely at stake. This can feel supportive. But there is a risk that habituation to AI’s frictionless responsiveness gradually reshapes our expectations of human interaction – making the messiness and unpredictability of real relationships feel inadequate by comparison.

The concern isn’t that AI companions will replace human friends. It’s subtler: that a society increasingly habituated to AI-mediated responsiveness may become richer in communication while poorer in belonging – more responsive in appearance while less capable of the mutual recognition that constitutes genuine solidarity.

What Should Actually Change

The practical implications fall into three areas: education, organizational practice, and AI design itself.

In education

Most current educational responses to AI focus on plagiarism detection, fact-checking generated outputs, or teaching prompt skills. These are downstream responses. We propose something upstream: require students to formulate the problem before consulting the machine.

Concretely: before any AI-assisted task, students write out their own formulation – what they think the question is, what they don’t know, what framings seem possible. After using the AI, they compare their initial framing with the machine’s version: what did the AI foreground? What did it omit? Was the framing it offered more or less adequate than theirs?

This isn’t Luddism. It’s a recognition that some forms of productive difficulty – the struggle to define a problem, the discomfort of staying with ambiguity, the effort of formulating your own question before receiving an answer – are developmentally essential. Removing them in the name of efficiency doesn’t just make learning easier; it removes the very conditions in which judgment and competence develop.

In organizations

The organizational equivalent is what we call the pre-AI frame: before any strategically or morally complex decision process, the team produces a brief written statement of how they understand the problem – what the tensions are, what alternatives seem worth considering – before any AI-generated synthesis is consulted. The AI’s output then becomes a second framing to compare with the first, not the authoritative starting point.

This isn’t about slowing things down for its own sake. It’s about preserving something crucial: the ability to trace who framed the problem, on what terms, with what assumptions. In a world where AI increasingly shapes interpretive conditions, that traceability is the only way accountability remains meaningful.

Organizations should also actively protect the value of disagreement. When AI-generated synthesis makes dissent feel deviant or inefficient, something important is lost: not just the epistemic benefit of hearing different views, but the relational experience of being genuinely heard in one’s opposition. Trust in organizations isn’t built through agreement. It’s built through the experience of having genuinely contested something and being taken seriously.

In AI design

To the people who are building AI systems, we say: the dominant design logic of current AI – be helpful, be serviceable, reduce friction, provide clear answers – is not neutral. It systematically pushes toward binary framing, premature closure, and consensus.

A better AI design would include: prompting users to articulate their own framing before the system offers one; deliberately preserving interpretive friction rather than eliminating it; presenting alternative framings rather than a single synthesized one; and making clear that the system is offering a perspective, not an authoritative organization of reality.
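To make that last point concrete, here is a minimal, purely illustrative sketch in Python of what a “frame-first” interaction flow could look like: the user articulates their own framing before the model responds, and the model is then asked for several framings rather than a single synthesized answer. Everything in it – the function names, the prompts, the stubbed-out ask_model call standing in for whatever chat API a real product would use – is our own assumption for illustration, not a design taken from the original paper.

```python
# A minimal, hypothetical sketch of "frame-first" interaction design.
# `ask_model` stands in for a real LLM call; here it is a stub so the
# example runs on its own.

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned response."""
    return f"[model response to: {prompt[:60]}...]"


def frame_first_session(problem: str) -> None:
    # 1. Ask the user to articulate their own framing before any model output.
    user_frame = input(
        "Before the assistant responds, write your own framing of the problem\n"
        "(what you think the question is, what you don't yet know): "
    )

    # 2. Ask the model for several distinct framings instead of one synthesis,
    #    including what each framing leaves out.
    framings = ask_model(
        "Offer three genuinely different ways to frame this problem, "
        "noting what each framing leaves out. Do not recommend one.\n\n"
        f"Problem: {problem}"
    )

    # 3. Present the model's framings alongside the user's, as perspectives
    #    to compare, not as the authoritative organization of the problem.
    print("\nYour framing:\n", user_frame)
    print("\nAlternative framings (one perspective, not the answer):\n", framings)


if __name__ == "__main__":
    frame_first_session("A recurring conflict between two senior team members.")
```

The point of the sketch is not the code but the ordering: the user’s framing exists, in writing, before the machine’s, so the two can be compared rather than the first being quietly replaced by the second.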

The standard for AI design should not only be how helpful a system can be – but what kind of person and what kind of institutional practice its mode of helpfulness cultivates.

The Point Where the Machine Stops

Near the end of the conversation we’ve been quoting, the AI says something worth sitting with:

“To use the machine to reach the edge of thought, and then to realize that at the point where the machine stops, specifically human work begins.”

That’s a useful way to think about what we are ultimately arguing. AI can help you reach the edge of a question. It can organize what you already know, surface possibilities you hadn’t considered, and articulate things you couldn’t quite find words for.

But the work that matters most – deciding what the question really is, staying with its difficulty long enough for genuine understanding to form, taking responsibility for the framing and its consequences, building the kinds of relationships through which wisdom is developed and transmitted — that work remains irreducibly human.

The question this piece is written to open is whether we are, as individuals and as institutions, designing our lives and our organizations in ways that preserve that capacity – or whether we are, in the name of efficiency and convenience, quietly outsourcing the very things that make human judgment worth having.

That question doesn’t have an AI answer. It has a human one.

About this article:

This is an adapted version of “Framing Literacy and Practical Wisdom in AI Environments,” an academic paper by Yuval Dror, Alon Swartz, and Ora Setter. The original paper draws on critical algorithm studies, distributed cognition theory, and Aristotelian philosophy to develop a theoretical framework for understanding how AI shapes interpretive conditions in individual and organizational life. This version was prepared for intelligent readers without a background in the academic literature.

The full academic paper is available on Academia.edu:

https://www.academia.edu/165338758/Framing_Literacy_and_Practical_Wisdom_in_AI_Environments