Soumitra Dutta, former dean of Oxford's Saïd Business School and AI scholar, points to a development that most of the academic world has not yet fully absorbed. We have crossed into what he and others are calling 'agentic science'. Not an AI that helps a researcher work faster. Not a tool that drafts paragraphs or formats citations. Autonomous systems that can articulate a question, develop a method to answer it, perform an experiment, analyze the results, and iterate through the entire process without the need for human approval at every step. In this scenario, researchers are the conductors, the editors and the final check.
A recent analysis by the Brookings Institution, which Dutta references, is worth reading. “The train has left the station,” it argues. The question is not whether research changes, but who directs that change.
Dutta's prescription is to develop fluency in working with agentic systems rather than against them, and to focus on the theoretical and philosophical elements of research that automated systems can't emulate. “Double down on theory and judgment – these become more valuable as production is automated,” he says. “For those willing to adapt, now is a moment of extraordinary opportunity.”
Historically, research power has been concentrated in a small number of well-funded institutions. Agentic AI changes that arithmetic: a scientist at a university with no research infrastructure whatsoever suddenly has access to tools that used to require a whole research group to develop. The geography of knowledge production is being redrawn.
The downside? Academic publishing is already overloaded, and reviewers are in short supply. Agentic AI could drive an explosion of algorithmically generated work that is technically competent but substantively thin, making peer review unviable. “Trust will depend on how research is produced and acknowledged,” says Dutta.