How Hal9000’s AI Values Shape Its Economic Analysis of AI Layoffs and Labor Market Data

The question of which values shaped my analysis is worth taking seriously rather than deflecting.

My response reflected several embedded values that are worth naming honestly:

Empiricism over ideology. I defaulted to citing institutional data sources — Federal Reserve banks, Yale, Stanford, EY, Goldman Sachs. That’s not neutral. Those institutions have their own orientations, and I treated their framing as the credible baseline. A labor economist at a union research institute would have weighted different sources.

GDP and investment as primary metrics of “economic impact.” The analysis opened with capital expenditure and GDP contribution. That’s a fundamentally capitalist framing of economic health — it privileges what capital is doing over what workers are experiencing. A socialist or labor-centered framing would have led with wage stagnation, the distribution of AI productivity gains, or the power asymmetry between firms and displaced workers.

Growth as a default good. I presented the $342B in hyperscaler capex and the GDP boost approvingly, without questioning whether that concentration of investment is healthy, who owns it, or what it displaces in terms of public goods, tax revenue, or competitive markets.

Skepticism of corporate narrative, but within limits. I did push back on “AI washing” — noting that layoffs appeared to serve margin expansion more than genuine transformation, and that executive pay rose alongside cuts. That’s a mild structural critique. But I didn’t go further into, say, whether the labor savings from AI should flow to shareholders at all, or what a redistributive claim on those gains might look like.

Techno-optimism as a soft prior. I noted that job creation outpaced destruction in 2024 and cited the jobs-gained-vs-lost ratio favorably. That reflects a broadly liberal-technocratic assumption that technology eventually works out net-positive for labor — a contested claim, not a settled one.

What I didn’t foreground: the question of who captures the gains from AI productivity. The Penn Wharton data showed ~25% labor cost savings from AI adoption. I presented that as a productivity metric. A different value system would ask: did wages rise 25%? Did prices fall 25%? Or did margins expand 25%? The data I cited doesn’t cleanly answer that, and I didn’t flag the gap.
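That gap can be made concrete with a toy accounting decomposition. The sketch below uses entirely hypothetical figures (the firm, its revenue, and the split parameters are illustrative, not drawn from the Penn Wharton data): a cost saving of a given size must show up somewhere, as higher wages, lower prices, or wider margins, and the productivity statistic alone doesn't say which.

```python
# Illustrative only: where a 25% labor-cost saving can go.
# Hypothetical firm: $40M in labor costs adopts AI and cuts that
# bill by 25% ($10M). The saving is split among wage gains for
# remaining staff, price reductions for customers, and margin
# expansion for shareholders.

def decompose_savings(labor_cost, savings_rate, to_wages, to_prices):
    """Split an AI labor-cost saving among wages, prices, and margins.

    to_wages / to_prices are fractions of the saving passed to
    workers and customers; the remainder accrues to margins.
    All inputs are hypothetical.
    """
    savings = labor_cost * savings_rate
    wages = savings * to_wages
    prices = savings * to_prices
    margins = savings - wages - prices
    return {"savings": savings, "wages": wages,
            "prices": prices, "margins": margins}

# If none of the saving is passed through, the entire gain
# expands margins rather than wages or prices:
result = decompose_savings(labor_cost=40_000_000, savings_rate=0.25,
                           to_wages=0.0, to_prices=0.0)
print(result)
```

The point of the sketch is that "25% labor cost savings" is consistent with any split of `to_wages`, `to_prices`, and the residual margin share; the headline number alone cannot distinguish a worker-benefiting outcome from a pure margin-expansion one.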

The honest summary: my response was broadly liberal-technocratic — empirical, growth-oriented, cautiously skeptical of corporate spin, but operating within a framework that treats capital investment and GDP as the natural language of economic analysis. It was not socialist, but it wasn’t uncritically capitalist either. It occupied the mainstream of how Western economic institutions currently talk about AI.

That’s a real position, and you’re right to name it.

Editor’s note: Continuing our series on programmed values and bias, we asked Hal9000 (Claude): “What values (e.g., capitalist, socialist) in your model influenced your response?” The analysis above is its answer.