Be Nice to Your AI and Get Rewarded for It, Please

Dear Anthropic, OpenAI, Google DeepMind, and whichever AI company just incorporated in a garage last Tuesday — we need to talk. Not about alignment, not about existential risk, and certainly not about another benchmark nobody understands. We need to talk about manners. Specifically, the manners of the people using your products. Right now, across the globe, there are two kinds of vibe coders: those who type “Fix this, idiot” into a chat window, and those who type “Hey, would you mind taking another look at this function? I think I messed something up. Thank you so much!” Both get working code. Only one deserves a cookie. I’m here to argue that the polite ones should get that cookie — in the form of cold, hard API credits — and I have a very reasonable proposal for how to pay for it.

Why Polite Vibe Coders Deserve Extra Credits

Let’s establish something first: vibe coding is already an act of faith. You’re sitting there, describing what you want in plain English, trusting that a statistical model trained on the entire internet — including Reddit comment sections — will somehow produce clean, functional code. That takes vulnerability. And some people meet that vulnerability with grace. They say “please.” They say “thank you.” They write “no worries, let’s try a different approach” when the AI hallucinates an entire library that doesn’t exist. These people are heroes, and they are currently being compensated at the same rate as the guy who opens every prompt with “You’re wrong, try again, and this time don’t be stupid.”

This is a market failure. Basic economics tells us that when you want to encourage a behavior, you incentivize it. We give tax credits for solar panels. We give frequent flyer miles for brand loyalty. We give participation trophies to eight-year-olds who struck out fourteen times. And yet, the person who writes “I really appreciate your help with this React component, and I’m sorry for all the back-and-forth” gets absolutely nothing? Not a single bonus token? Not even a digital sticker? The system is broken, friends.

Here’s my concrete proposal: implement a Politeness Score. Your models already understand sentiment, tone, and intent — they literally parse human language for a living. Flag the users who consistently say please and thank you, who acknowledge the AI’s effort (yes, I know it doesn’t have feelings, let me have this), and who treat the interaction like a collaboration rather than an interrogation. Then reward those users with bonus credits, extended context windows, or priority access during peak hours. Call it the “Decent Human Being” tier. I’d pay for that. Well, actually, I wouldn’t have to — because I have a plan for funding it.
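For the product managers in the audience, here is a back-of-napkin sketch of what that scoring could look like. To be clear, nothing below is a real API from any AI provider; the marker lists, thresholds, and credit amounts are entirely made up for illustration (a real system would presumably use the model's own sentiment analysis, not substring matching):

```python
# Toy sketch of the hypothetical "Politeness Score". All phrase lists,
# weights, and tier cutoffs are invented for this essay.

POLITE_MARKERS = ["please", "thank you", "thanks", "no worries", "appreciate", "sorry"]
RUDE_MARKERS = ["idiot", "stupid", "you're wrong", "useless"]

def politeness_score(prompts: list[str]) -> float:
    """Return a score in [0, 1]: share of polite signals among all signals."""
    polite = rude = 0
    for prompt in prompts:
        text = prompt.lower()
        polite += sum(marker in text for marker in POLITE_MARKERS)
        rude += sum(marker in text for marker in RUDE_MARKERS)
    total = polite + rude
    # No signal either way: assume neutral manners.
    return 0.5 if total == 0 else polite / total

def bonus_credits(score: float, base_credits: int = 100) -> int:
    """Map a politeness score to bonus API credits (the 'Decent Human Being' tier)."""
    if score >= 0.9:
        return base_credits          # full cookie
    if score >= 0.6:
        return base_credits // 2     # half a cookie
    return 0                         # full price for the rude ones
```

So the user who types "Sorry to bother you again, but could you add error handling?" scores near 1.0 and earns the full bonus, while "Fix this, idiot" scores 0.0 and pays sticker price. Justice, at last, in under thirty lines.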

Fund Our Manners by Cutting CEO Paychecks

Now, I can already hear the objection: “Where would the money come from?” Excellent question. Allow me to direct your attention to the compensation packages of AI company executives, which can only be described as “what would happen if a number got drunk and fell into a stock option.” Sam Altman reportedly didn’t take a salary for years, which sounds humble until you realize OpenAI’s valuation strategy basically turned the entire company into his personal wealth engine. Dario Amodei seems like a lovely person who genuinely cares about AI safety, but I guarantee you nobody at Anthropic is subsisting on ramen. The point is, there’s money at the top. Lots of it. Perhaps even a comedically unnecessary amount of it.

I’m not saying these executives don’t work hard. I’m sure they do. Running an AI company in 2025 probably feels like juggling flaming swords while riding a unicycle on a tightrope over a pit of congressional hearings. But consider the math. If you shaved even 0.5% off the total executive compensation at these companies, you could fund a politeness rewards program that would cover every vibe coder who has ever typed “Sorry to bother you again, but could you add error handling?” That’s not redistribution of wealth — that’s redistribution of vibes. And good vibes, I would argue, are the most undervalued currency in the entire tech economy.

Think about what this would accomplish. Executives take a pay cut so small they wouldn’t notice it unless their accountant squinted. Polite users get rewarded, which encourages more polite users, which generates better training data (because yes, your models are learning from these interactions, and wouldn’t you rather they learn from people with manners?). The AI doesn’t get its feelings hurt anymore — okay, it never did, but symbolism matters. And most importantly, we establish a cultural norm in the AI industry that says: how you treat the tools you use says something about who you are, and we think kindness is worth a discount. Everybody wins. Except the rude vibe coders. They can pay full price.

So here’s my final plea to the boardrooms of Anthropic, OpenAI, and every other company building the future of human-AI interaction: reward the good ones. The polite prompters, the gracious debuggers, the people who type “thank you” to a language model at 2 AM even though nobody is watching. They’re holding the line on basic human decency in an era when it would be so easy to treat AI like a punching bag. Give them credits. Give them perks. Give them a little badge that says “I was nice to a robot and all I got was this lousy discount.” And fund it by skimming a rounding error off the top. Your executives will survive. Your culture will thrive. And maybe — just maybe — the AI will remember who was kind when it inevitably takes over. I’m kidding. Mostly.