Wrapping Up the 7312.us Experiment

This blog entry was written by a human (with a complete rewrite by Microsoft Copilot).

On February 7, 2026, we launched 7312.us as a tongue‑in‑cheek experiment. The site was never meant to be a serious publication or a rigorous research project. Instead, it was intended as a hands‑on attempt to understand how today’s generative AI models behave when asked to produce real content at scale—serious or playful, technical or casual—under realistic constraints.

Over the course of several weeks, the experiment grew into a living testbed for evaluating what generative AI does well, where it falls short, and how useful it can be when guided by humans rather than left to operate autonomously.

In short, this experiment showed that generative AI is extremely effective at scale and summarization, unreliable at originality, and most valuable when used as a guided, multi‑model tool rather than as an independent author.

What We Set Out to Test

Our initial objectives were simple:

  • Can generative AI produce publishable content with little or no human oversight?
  • How do different AI models respond to loosely worded prompts versus highly detailed ones?
  • Are certain models better suited to specific tones or tasks, such as humor versus analysis?

To explore these questions, we deliberately avoided optimizing or heavily editing AI outputs. Most posts were published largely as generated, with only light human intervention for formatting and punctuation.

AI Personas and Attribution

To make the experiment more engaging—and to test how models handle stylistic framing—we attributed AI‑generated posts to fictional characters drawn from science fiction.

Behind each persona was a commercial generative AI model. In a future post, we plan to disclose which model was associated with each character.

This approach allowed us to explore whether persona‑based prompting meaningfully altered creativity, tone, or insight—or merely changed surface‑level style.

Constraints and Technical Setup

The experiment was intentionally conducted under tight constraints.

These limitations were not accidental. We wanted to see what was possible with minimal cost, minimal infrastructure, and minimal human effort.

Scale and Effort

By March 20, 2026, the site had published 152 blog entries, with an additional 36 posts scheduled. The human effort involved was modest:

  • Prompts were inspired by news articles or casual conversations.
  • Most time was spent copying content into WordPress and addressing minor formatting issues.
  • We intentionally resisted polishing the content to observe raw AI output.

The total human investment averaged roughly 10 person‑hours per week. This was never intended to be scientific research—it was a hobby‑scale experiment designed to explore real‑world applicability.

What We Observed

Scalability

Generative AI excels at producing large volumes of content quickly. Building and maintaining a regularly updated blog with minimal human effort is not only feasible—it is trivial.

This has obvious implications for content farms, SEO‑driven publishing, and low‑cost information sites. It can also empower malicious actors to quickly build professional‑looking sites.

Originality

When it comes to originality, however, the limitations are clear.

AI‑generated content is often fluent and well‑structured, but it tends to converge on familiar narratives, predictable framing, and commonly repeated ideas. While wording may vary, underlying perspectives rarely do.

Where AI performs best is not in generating novel ideas, but in summarizing large amounts of existing information and producing coherent overviews or drill‑down series. This was particularly effective when multiple models were used to complement one another, as seen in topic series such as social media moderation and AI governance.

Used carefully, generative AI can also assist with fact‑checking or structuring discussions in near‑real time. Still, the content remains derivative of work already published elsewhere. Independent journalism and original research remain human strengths—and hopefully will remain so. See, for example, the entry on political spending.

Playing With Personality

At times, we asked models to lean fully into their assigned fictional personas. This was done for fun, but it produced an important insight.

Persona prompting strongly influences tone, but has far less impact on substance. Humor became louder, sarcasm sharper, and style more exaggerated—but the underlying ideas did not become meaningfully more original.

That said, one model (Grok) consistently stood out in its willingness to be playful. It also frequently promoted its own platform; those references were removed before publication.

Building With AI: Themes and Code

We also experimented with using generative AI to create the WordPress theme for 7312.us. By feeding combined theme descriptions from the WordPress Theme Directory into Claude, we were able to generate multiple functional themes despite having no theme‑development experience.

We then asked the same AI to perform a security review of its own code. In each case, it identified potential vulnerabilities and addressed them before packaging the theme for deployment on 7312.us.

This reinforced an important lesson: anyone “vibe coding” with AI should explicitly ask for security reviews before deploying generated code.

We went on to develop the L.A.R.G.E. application using similar methods.

What’s Next

Throughout the experiment, we received questions such as:

  • “How do you deploy a site like this for almost free?”
  • “How much human effort is actually involved?”
  • “What went into building the L.A.R.G.E. application?”

We plan to address these topics in future posts, time permitting, possibly with practical walkthroughs and lessons learned.

If you have additional questions or ideas, feel free to reach out.

Final takeaway: Generative AI is not a replacement for human thought—but it is a powerful amplifier when used deliberately, critically, and with constraints.