Generative AI and the era of increased gatekeeping

Generative AI models can create text, images, code, and music faster, and in greater quantities, than we can absorb them. Before ChatGPT was introduced to the world in November 2022, producing a piece of media took longer than consuming it. In 2026, the equation has been turned on its head.

If your job has, thus far, involved curating or evaluating the work of other humans, this is a problem. Generative AI is bad news for teachers and professors, editors at magazines and publishing companies, maintainers of open-source projects, academics doing peer review or replicating studies, and anyone else who must review the work of their peers, whether to give feedback or to control quality.

If you have a job that fits this description, then you’ve probably been inundated with low-quality AI-generated content in the past few months. It only takes a few minutes for somebody to “write” a short story using an LLM; it takes a human hours to read and evaluate it.

In response to the increasing burden on curators, organizations are tightening the rules around how they handle submissions. Some are taking the moderate stance of requiring that AI-generated content be disclosed and cleaned up before submission, but many are banning outside contributions altogether.

Other organizations are placing strict restrictions on the number of submissions they will accept and making their submission rules more stringent.

This is a net negative for society. Organizations lose out on potentially good contributions, people early in their careers lose out on a chance to get feedback from experienced professionals, and the rest of us lose because fewer good works make their way into publications and the commons.

I see three possible futures ahead of us.

First: the novelty of using ChatGPT to produce work and throw it over the wall without reading it wears off. It becomes a social faux pas to submit AI-generated work for publication without extensively vetting and editing it. Enough people are named and shamed that new social norms around the use of generative AI emerge. Our societies adapt so that putting your name on a work without verifying its quality is an act that destroys your reputation.

Second: we come up with methods to prove that you have in fact done the work you claim to have done. Like proof of work in cryptography¹, but for humans. Any submission without proof of work gets an automatic rejection. I can’t imagine what this would look like, though. More importantly, I can’t imagine that we will collectively agree to put ourselves through the indignity of being judged by an algorithm. But hey (points to everything), look at the world we’ve made. Society has a high tolerance for algorithmically inflicted indignities.
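
For the cryptographic sense of the term: proof of work means searching for a nonce such that hashing it together with a message yields an output with some rare property (say, a run of leading zeros). The search is expensive; checking a candidate is trivial. A minimal hashcash-style sketch in Python, with illustrative function names and difficulty:

    import hashlib

    def find_proof(message: str, difficulty: int = 4) -> int:
        # Grind through nonces until sha256(message + nonce) starts
        # with `difficulty` hex zeros. Expensive to compute, by design.
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce
            nonce += 1

    def verify_proof(message: str, nonce: int, difficulty: int = 4) -> bool:
        # Checking a claimed proof is a single hash: cheap for the gatekeeper.
        digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

    # The prover burns CPU time; the verifier checks instantly.
    nonce = find_proof("my short story", difficulty=4)
    assert verify_proof("my short story", nonce, difficulty=4)

The asymmetry is the point: the work is costly to produce and nearly free to check, which is exactly the property a human version would need.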

Third: we enter a new era of gatekeeping, in which most of us can no longer fix a bug in our favorite open-source projects, submit stories to literary magazines, apply for public job postings, or get peer review on our papers. Unless you’re a well-known name, or you know somebody who knows somebody, or you can get somebody to vouch for the veracity of your work, you’re considered a nonentity. An era of eroding trust, where anything created by someone you don’t personally know is considered suspect. An era of increased gatekeeping that allows only some of us to publish, while the rest of us perish.

Personally, I think we’ll land on a combination of the three possible outcomes. Some organizations will name and shame, some will ask for proof of work, and yet others will step up their gatekeeping. And who knows, there’s probably a secret fourth option that I haven’t thought of. I’ve never been great at predicting the future.

That said, I remain optimistic² about our ability to handle this situation. I believe people are generally nice and just want to help, even the ones sending 5,000-line vibe-coded pull requests to open-source projects. Our societies are still adjusting to a strange new technology, and the social norms around its use have not been written yet. Until we collectively figure out how to behave reasonably, we might see slightly increased gatekeeping, but my hunch is that it’ll be temporary.

I believe we’ll eventually get to a point where we all learn to be editors and reviewers and slush-pile readers of our own AI-generated work. That’s an interesting future to consider: one in which generative AI has turned us all into more discerning readers.

Footnotes

  1. Cryptography, not cryptocurrency. The crypto-bros have given perfectly reasonable mathematical techniques a bad name, so I feel it’s important to mention that here.

  2. Sloptimistic? Ha.