
The Trump War on Science™ delivered a new body blow. The National Institutes of Health (NIH) put out new guidance for researchers applying for grant funding. From September, proposals for new and ongoing research projects would be scrutinized for inordinate use of AI tools like ChatGPT in writing the proposals. Even worse, the NIH would limit researchers’ submissions to only six proposals per year.
Naturally, Science magazine is wringing its hands. The new limits could “hinder researchers in need of funding.” It will significantly hamper researchers’ “survival strategy.” Many researchers “are going to be screwed over by this new policy,” and it will “squash collaborations like bugs.” To add insult to injury, the NIH has advanced this guidance just as the Trump administration is “creating yet another hurdle for researchers even as it slashes science budgets.” Oh, the humanity!
What to do? Contrary to conventional wisdom, the problem facing modern science is not a lack of funding; it is too much funding. The academic sciences are the beneficiary of a nearly $100 billion annual stream of government revenue, a substantial slice of it flowing out of the NIH. This funding stream has been increasing exponentially since 1950, with a doubling time of roughly seven years. For that exponentially rising flood of money, scientific discovery has not grown exponentially, but has ticked along at a fairly steady pace, seemingly indifferent to the money being “invested” in it.
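(A quick back-of-the-envelope gloss on that figure, taking the seven-year doubling time at face value: exponential growth with doubling time $T$ corresponds to an annual growth rate $r = \ln 2 / T$.)

$$
r = \frac{\ln 2}{7\ \text{years}} \approx 0.099 \approx 10\%\ \text{per year}
$$

At that pace, the funding stream doubles about ten times between 1950 and today, roughly a thousandfold increase in nominal terms.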
[RELATED: MIT Prof Slammed for HHS Report on Gender Care—But Science May Be on His Side]
What that flood of money has done is build a network of perverse incentives that have shifted the entire ethic of science, away from the pursuit of discovery to the pursuit of funding. Where funding had once been the means to the end of discovery, funding is now the end itself. Careers are no longer shaped by what scientific discoveries a scientist has made, but by how much research revenue a scientist generates for the university or institute that employs them. The scientist quoted in the Science article who worried about his “survival strategy” was not being hysterical; he was being realistic. In the modern academic environment, if you don’t bring in enough dollars to satisfy your employer, you don’t have a career.
This is why the writing of grant proposals consumes a disproportionate share of an academic researcher’s time and effort. A submitted proposal has a low probability of being funded, ranging from roughly one-in-four at the National Science Foundation (NSF) to roughly one-in-ten at the NIH. When the whole point of the scientific enterprise has become pulling in revenue, the strategy researchers must pursue is simultaneously rational and perverse. If a proposal has a one-in-ten chance of being funded, the gambler’s strategy is clear: submit at least ten proposals a year in the hope that one is funded. This is why scientists find themselves chained to the soul-crushing regimen of the “grants treadmill,” a never-ending scramble for funding that leaves little time, energy, or space for the creative thought that is the foundation of scientific discovery.
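To spell out that arithmetic (a simple sketch, assuming, unrealistically, that every submission is an independent one-in-ten draw):

$$
P(\text{at least one award}) = 1 - (1 - 0.1)^{10} = 1 - 0.9^{10} \approx 0.65
$$

Even ten submissions a year leave roughly a one-in-three chance of coming up entirely empty, which is why the treadmill never slows.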
Hence, the twin aims of the new NIH guidance, which was prompted by a rise in the number of grant submissions the agency was receiving, with at least one investigator submitting forty proposals in a single annual funding cycle. Given that scientists on the grants treadmill are already stretched very thin, that surge in submissions must have had help, and the help appears to have come from the growing use of artificial intelligence (AI) tools, such as large language models (LLMs) like ChatGPT, to do the heavy work of proposal writing.
As in everything else in academia, AI is upending traditional ways of doing scientific research. Nature magazine has been exploring science in the new age of AI since 2023. Like any tool, AI can be useful. It can crawl the published literature and provide serviceable summaries of past research. It can comb through datasets, large and small, and highlight patterns that might elude human scrutiny. Researchers’ use of AI for writing and summarizing papers, analyzing data, and writing up results is also on the rise, more so among the younger cohorts. Like any tool, though, AI can also be abused. It can generate realistic-looking imagery, for example, or alter features in images in ways that deceive or distort, the kind of image manipulation that recently cost Marc Tessier-Lavigne his presidency at Stanford University.
Most seriously, LLMs are derivative, spinning narrative out of what has already been published. This poses special challenges for scientists, whose job is to discover new knowledge, which by definition provides no data for an LLM to parse, or to approach old problems in creative ways that, by definition, no one has thought of before. When sufficient data are lacking, LLMs have been known to “hallucinate,” making up fake narratives and fake references when they cannot find real ones to incorporate. Narratives generated by LLMs are also easily recognizable: they are conspicuously dull and anodyne, lacking the ineffable spark that indicates a creative mind at work. AI can also be a creativity killer, reducing the diversity of ideas generated in collaborative brainstorming. And there is the alarming phenomenon of AI models bursting out in racist and neo-Nazi rants.
The NIH’s latest guidance responds to a real problem: the use of AI to mass-produce grant proposals in order to game the odds of landing funding. And the NIH’s motivation is understandable: growing numbers of dull, derivative, AI-facilitated proposals will only burden the agency’s already strained apparatus for evaluating them.
[RELATED: These Top Science Officials Love the Copy-Paste Function]
The NIH has the diagnosis wrong, though. It is not AI that is the problem here; it is the network of perverse incentives that makes using AI as a proposal-generating tool the rational strategy. Failing to see those perverse incentives as the root problem will doom the effort, as it has doomed similar efforts in the past. Setting arbitrary limits on the number of proposals a scientist can submit has been tried before, at the NSF. Scientists, acting rationally, if perversely, in their self-interest, fought back and got the restrictions lifted. Lifting them did nothing to remedy the perverse incentives that strap scientists to the grants treadmill, making them willing serfs in a system built to suppress them, not liberate them to explore science’s “endless frontier.” I predict a similar outcome for the NIH’s current attempt.
What is needed is to dismantle the network of perverse incentives that makes the pursuit of grant money the principal driver of science, along with the deleterious behaviors that follow, like strapping oneself voluntarily to the grants treadmill. Alternative models for funding research do exist, but they are generally the exception to the prevailing grants model. A better solution would be to make these alternative models the rule, not the exception. That they have not become the rule points to an entirely different set of perverse incentives, these applied to institutions, not scientists.
But that is another story, for another day.
Follow Scott Turner on X and visit our Minding the Science column for in-depth analysis on topics ranging from wokeism in STEM, scientific ethics, and research funding to climate science, scientific organizations, and much more.
Image by Jared Gould
The larger issue is that there hasn’t always been government-funded “science,” i.e., funded research.
The first two government forays into higher education — the Normal Schools and the Land Grant Colleges — were efforts to share existing knowledge with young adults. The Normal Schools sought to prepare schoolteachers and the Land Grants sought to prepare farmers — professors did their research on their own time and on their own dime.
Then there was the Fifty Years War (1941–1991), when we fought first the National Socialists and then the Communists. Higher education helped by developing everything from high-octane gasoline to the atomic bomb, and it was well rewarded for having done so. But the war ended over thirty years ago, and it’s time to refocus on undergraduate education.
Dr. Mark Anderson, EVP of Biological Sciences and the Hospital at the University of Chicago, made an argument for uninterrupted NIH research funding (Chicago Tribune, July 7, 2025), appealing to safety and lives. Interestingly, as a university executive, he made no reference to research program economics, his costs, university efficiency, or expected financial-return criteria. The contention is otherwise ad hoc, which is revealing about the funding process and its management, as well as about funding uses and economics (https://abcnews.go.com/Politics/confusion-nih-administration-halts-releases-billions-dollars-scientific/story?id=124211213). Another question is why public research requests from the R1 sector are so heavily weighted toward disease research, as opposed to health-care delivery and cost, or to alternative industrial priorities such as aerospace or fusion energy.
The author writes:
> Contrary to conventional wisdom, the problem facing modern science is not a lack of funding; it is too much funding. The academic sciences are the beneficiary of a nearly $100 billion annual stream of government revenue, a substantial slice of it flowing out of the NIH. This funding stream has been increasing exponentially since 1950, with a doubling time of roughly seven years. For that exponentially rising flood of money, scientific discovery has not grown exponentially, but has ticked along at a fairly steady pace, seemingly indifferent to the money being “invested” in it.

Yet in inflation-adjusted dollars, the FY2023 program level (including the new Advanced Research Projects Agency for Health, ARPA-H) was only slightly higher (1.3 percent) than the FY2003 peak level.
In other words, there was no change over this twenty-year period; the big increase came after the end of the Cold War. In addition, scientific discovery has decreased pretty much independently of how much money is spent; the low-hanging fruit have been picked. The days of Pascal or Newton, when fundamental discoveries could be made in multiple fields, are far behind us.
All that being said, the author is completely correct that the amount of time spent on grant applications is a real waste of scientists’ resources. But the proposed solution (cutting funding) is like blowing off your head to cure a pimple. America’s standing in the world is in large part the result of its ascendancy in the physical and health sciences; why would you throw that away? Block grants to young researchers would be a more useful way to proceed.