The AI you are using today is the worst AI you will ever use. And it is only the beginning. But the technology is not the bottleneck. We are. Over 70 million adults in the U.S. cannot reliably understand what they read. And before you think “that is not me,” thirty years of point-and-click and social media have quietly eroded how all of us communicate. This issue is about three human skills that determine whether AI helps or misleads you. None of them is technical.
Go Deeper: This week’s companion 15-minute briefing episode digs into the conversation that started it all. Apple Podcasts | Spotify
AI Is an Amplifier
Think of a microphone. It does not make a bad singer good. It makes a good singer louder, and a bad singer louder too. AI works the same way.
Strong reading skills? AI makes you faster and sharper. Weak reading skills? AI makes you confidently wrong, at scale. Good management practices? AI supercharges them. Broken processes and poor communication? AI scales the dysfunction faster than any human ever could.
AI does not fix your weaknesses. It exposes them. And then it amplifies them.
The Fault Lines Nobody Is Talking About
The latest results from the Program for the International Assessment of Adult Competencies (PIAAC 2023), administered by the U.S. Department of Education, paint a picture that every AI leader needs to see:
| | 2017 | 2023 | Change |
|---|---|---|---|
| Adults at or below Level 1 literacy (below a 3rd-grade reading level) | 19% | 28% | +9 pts |
| Adults below Level 3 (below a 6th-grade reading level) | 50% | 54% | +4 pts |
Source: PIAAC 2023 / National Center for Education Statistics; APM Research Lab
Now run those numbers through the amplifier.
What happens when millions of people who cannot critically read a paragraph start relying on AI to make decisions? They do not just get bad answers. They get bad answers and believe them completely. And they pass those answers along, to colleagues, to clients, to voters.
AI sounds charming, eloquent, and authoritative. It never hesitates. And if we cannot evaluate what it is telling us, we will believe every word.
Manipulation used to be slow and expensive. Now it is fast and fluent. AI does not just impact productivity. It undermines society’s ability to function when its members cannot tell what is real.
This is not just an education problem. It is the foundation beneath every AI deployment, every agent rollout, every digital transformation. And it is cracking.
Some Institutions Are Getting It Right
Not everyone is looking the other way. A growing number of universities are embedding AI literacy into their core curricula, and the approach matters. Inside Higher Ed (April 3, 2026) profiled five institutions, and the pattern is clear:
Cornell built a discipline-independent critical thinking module for the AI era. Agnes Scott College is launching an AI literacy curriculum for every first-year student starting Fall 2026. Others, including Bryn Mawr and Richmond, are embedding humanistic inquiry, ethics, and reasoning across their programs.
They are teaching students to think about AI, not just to use AI. The tools will change every six months. The thinking skills will not.
The Three Skills That Determine Everything
If the tool is never going to be the bottleneck, three humanistic skills determine whether AI helps you or misleads you. None of them is technical.
| Skill | What It Means for AI | The Risk When It’s Missing |
|---|---|---|
| Reading comprehension | The ability to carefully read and evaluate AI output, catching false claims, unsupported assumptions, and subtle framing | You accept everything at face value. Hallucinations pass unquestioned |
| Critical thinking | The ability to question, verify, and challenge; knowing when something needs a second look | You trust AI output blindly. No defense against misinformation |
| Communication | The ability to articulate what you need clearly, precisely, and with context | AI amplifies your confusion. Vague input produces generic, unreliable output |
This is not prompt engineering. This is the foundational human skill of expressing your thinking so that others, and now machines, can act on it.
Our own data, drawn from 370,000 individuals across 70 countries, tells the same story from the AI readiness side.
This is not just about the 70 million adults at Level 1 literacy. The communication gap is universal. We see it in boardrooms, in doctoral programs, in global enterprises.
Thirty years of point-and-click software trained us to interact through menus, keywords, and search bars. Twenty years of social media and 30-second videos have compressed how we express complex ideas. We stopped writing in full thoughts. We stopped reading beyond headlines.
The muscle that lets us articulate what we actually mean, with precision and context, has atrophied. And now AI demands exactly the skill we let erode: the ability to communicate clearly, not to a search bar, but to a system that mirrors human conversation.
We see this constantly. A senior leader at a Fortune 500 company was convinced that Claude, ChatGPT, Gemini, and Copilot were all useless. When we looked at the pattern, nine out of ten bad results traced to how he communicated with the tools: vague instructions, missing context, and ambiguous asks. The same gaps that had shown up in team feedback for years were now visible in every AI interaction. The tools were not failing him. His communication skills were.
The Piano Problem: Why Most AI Rollouts Fail
Here is the typical corporate AI rollout: IT picks a platform, builds an online learning course, maybe schedules a lunch-and-learn, someone creates a Slack channel called “AI Tips,” an “AI task force” forms around features and tech jargon, and leadership says, “go play with it.” Six months later, adoption is uneven, results are disappointing, and everyone blames the technology.
You would never hand someone a piano and a YouTube tutorial and blame Steinway when they cannot play. But that is exactly what we do with AI. This is not a technology rollout. It is a change management challenge.
We worked with a global retailer on a three-phase engagement that looked nothing like a typical AI deployment.
The Management Gap
We talked in a previous episode about organizations hiring more AI agents than people and not knowing how to manage them.
Right now, middle managers are the fulcrum of AI adoption. They have to evaluate AI-assisted work from their teams. They have to coach people whose skill gaps just became visible. They have to run performance conversations about judgment, not just output. And most of them have zero preparation for any of that.
When a team member uses AI to draft a client proposal and misses a false assumption, that is a coaching moment. But if the manager cannot catch it either, the bad output reaches the client.
And here is what should concern us most: every unchallenged AI output that becomes a decision becomes the basis for the next decision. Bad judgment compounds. Across teams. Across industries. In healthcare, in hiring, in financial services. Systemic failure, hiding behind the appearance of efficiency.
What You Can Do This Week
If you manage people: Take the last AI request that disappointed your team. Sit down and rewrite it together. Full context. Clear constraints. Specific outcome. Then compare the results. That gap is your training roadmap. Do that every week. Not to catch mistakes. To sharpen how your team thinks before they ask.
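To make the exercise concrete, here is what such a rewrite might look like. The scenario, client name, and numbers below are hypothetical, invented purely for illustration, not drawn from an actual engagement:

```text
Before (vague):
  "Write a proposal for the client."

After (full context, clear constraints, specific outcome):
  "Draft a two-page proposal for Acme Corp's warehouse-automation project.
   Context: they rejected our last bid as too expensive; their budget cap is $250K.
   Constraints: plain language, no jargon; use only figures from the attached cost sheet.
   Outcome: a draft my team can review in tomorrow's 30-minute standup."
```

Notice that the gap between the two versions has nothing to do with the tool. The missing context and constraints are exactly what a human colleague would also have needed to do the work well.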
If you control AI training budgets: Stop spending it all on features and prompt engineering. Those change every quarter. The best return on investment in enterprise AI comes from something no one wants to fund: communication skills, analytical thinking, and the ability to frame a problem clearly before you ever open the tool.
We worked with a supply chain team that completed an AI certification program from their vendor and still struggled to get traction. The issue was not the tools. It was foundational. So we ran a six-week engagement focused entirely on problem framing, structured communication, and critical evaluation of output. Their results improved more in six weeks than in the previous six months.
This belongs under people development, not IT training. And most leaders have that backward. A $50,000 tool training program that no one applies is a waste. But focus that budget on communication and critical-thinking fundamentals, and it changes how every tool, model, agent, and platform is used. The skills transfer because they are human skills, not product skills.
The One Question
So ask yourself: what am I amplifying? The tools will keep getting better. That is inevitable. The question is whether we will. Better AI does not mean better results. Better communicators and better thinkers do.
Resources
- AI Compass: ai-compass.ai
- AI ROI Calculator: roicalc.ai
- All Research & Insights: ai4sp.org/insights
If this episode made you wonder where to start, that is exactly what the AI Compass is built for. Our global partner network in the US, UK, Spain, Brazil, and Australia can help you get started.