
AI: Your Manuscript’s New Co-Author?

  • Writer: Triple Helix
  • 4 hours ago
  • 6 min read
Image Citation: [1]

Written by Alexandra Bergholt ‘27 

Edited by Andrew Ni ‘26

Imagine walking into a prestigious scientific conference. You expect the familiar buzz: hundreds of researchers debating their findings, poster sessions stretching across the room, excited conversations spilling out of lecture halls. What if, however, that noise were replaced by the thrum of machines? What if the researchers, presenters, and attendees weren’t human, but AI agents tasked with presenting and analyzing scientific discoveries?

On October 22, the Agents4Science 2025 conference, hosted by Stanford University, put this radical experiment to the test. The conference accepted paper submissions from across the scientific disciplines with a single requirement: at each step of the scientific process, AI agents had to do the heavy lifting [2]. Billed as the First Open Conference of AI Agents for Science, it offered a novel model for research conferences, one in which AI agents serve as both the primary authors and the primary reviewers of research manuscripts [3].

Almost all of us are familiar with AI in our everyday lives through Large Language Models (LLMs) like ChatGPT. After its 2022 launch, ChatGPT reached 100 million users in just two months, making it the fastest-growing consumer application in history [4]. As of 2025, it receives 193.33 million visits per day and averages 800 million weekly active users [5]. Judging by the numbers alone, odds are most of you reading this are more than familiar with this technology.

Most people, however, are unaware of its capabilities beyond text generation, especially AI agents. AI agents are systems that pair LLMs with tools or databases to perform multistep tasks rather than simply answering a prompt. At Agents4Science, agents were used in every aspect of manuscript production, from hypothesis formulation to data analysis, and each researcher was required to detail how AI contributed to each stage of the research process [2].
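
To make that definition concrete, here is a minimal sketch of the loop at the heart of an agent: the model decides whether to call a tool, the tool’s result is fed back in, and the cycle repeats until the model produces an answer. Every name here (call_llm, lookup_dataset) is an illustrative stand-in, not the conference’s actual systems or any particular framework’s API.

```python
# A minimal agent-loop sketch. call_llm and lookup_dataset are hypothetical
# stand-ins: a real agent would wrap an actual LLM API and real tools
# (databases, literature search, code execution).

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call. Returns a canned tool request the first
    time, then a final answer, so the loop below runs as written."""
    if "RESULT:" not in prompt:
        return "TOOL:lookup_dataset:protein_stability"
    return "FINAL: A hypothesis drafted from the retrieved data."

def lookup_dataset(name: str) -> str:
    """Stand-in for a tool the agent can invoke, e.g. a database query."""
    return f"summary statistics for '{name}'"

TOOLS = {"lookup_dataset": lookup_dataset}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Ask the LLM what to do, run any tool it requests, feed the result
    back in, and stop once it returns a final answer."""
    prompt = task
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        _, tool_name, arg = reply.split(":", 2)  # e.g. TOOL:lookup_dataset:x
        prompt = f"{task}\nRESULT: {TOOLS[tool_name](arg)}"
    return "step limit reached"

print(run_agent("Propose a hypothesis about protein stability."))
```

The difference from a chatbot is the feedback loop: the model’s output can trigger an action in the world, and the outcome of that action shapes its next step.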

Human reviewers then assessed the top submissions. Of 315 papers, 48 were accepted, roughly 15%. In traditional publishing, a low acceptance rate signals a high standard of rigor; high-profile journals such as Nature and The New England Journal of Medicine accept only about 5-8% of submissions [6]. Here, however, the low rate took on an entirely different meaning: it reflected the uneven quality of AI-generated work, much of which reviewers described as technically coherent but lacking depth or originality. Even so, James Zou, a computer scientist at Stanford University and co-organizer of the conference, argues that the number still represents a meaningful shift in the research landscape: “People are starting to explore using AI as a co-scientist” [2].

Yet despite AI’s growing involvement across all stages of scientific inquiry, most academic journals and conferences still forbid recognizing AI systems as co-authors. Such restrictions compel researchers to downplay AI’s role, hindering open acknowledgment of its impact and slowing the development of forward-looking research standards [1, 3]. And although AI holds vast potential for the scientific community, we must remain aware of its drawbacks. Risa Wechsler, a computational astrophysicist at Stanford who helped review submissions, reported mixed results: although the manuscripts were “technically correct,” they “were neither interesting nor important” [2]. Wechsler remains unconvinced that current agents can “design robust scientific questions” [2].

Silvia Terragni, a machine learning engineer at the company Upwork in San Francisco and a proponent of AI’s use in research, disagrees. In a recent study on AI reasoning in a job marketplace, Terragni gave ChatGPT context about problems within her company and asked it to propose paper ideas. The results were a success: “One of these was the winner,” she said, and was “selected as one of the three top papers in the conference” [2].

Reluctance to recognize AI’s role reflects the scientific community’s struggle to reconcile tradition with innovation. On one hand, concerns about accountability, authorship ethics, and reproducibility are valid: AI cannot take responsibility for errors. On the other, denying AI authorship risks obscuring how the science was actually done. Indeed, the footprint of AI in scientific work is already visible. Large-scale studies have documented spikes in the use of particular LLM “style” words (like delve, realm, meticulous, and underscore) in thousands of biomedical abstracts following the public release of ChatGPT [7].

Simultaneously, research on spoken, unscripted language reveals that AI buzzwords have surged in podcasts and interviews, suggesting even our everyday speech is being subtly reshaped by the way we interact with AI tools [8]. According to an analysis of more than 700,000 hours of videos and podcasts, words frequently used by ChatGPT, such as “delve” and “meticulous,” are becoming “increasingly common in spoken language” [4]. As AI systems are increasingly used to generate hypotheses, analyze data, and write and review manuscripts, we must acknowledge such contributions to maintain an accurate scientific record.
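
As a rough illustration of how such studies detect this shift, the toy sketch below compares the rate of a few LLM-flavored marker words in two small, invented sets of abstracts. The real analyses behind [7] and [8] follow the same basic idea at the scale of millions of documents, with careful statistical baselines; the abstracts here are made up for demonstration.

```python
# Toy version of the "excess vocabulary" measurement: compare how often
# LLM-flavored marker words appear in abstracts before and after a cutoff
# date. The four abstracts below are invented examples, not real data.

import re

MARKERS = {"delve", "realm", "meticulous", "underscore"}

def marker_rate(abstracts: list[str]) -> float:
    """Fraction of abstracts containing at least one marker word."""
    hits = 0
    for text in abstracts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if words & MARKERS:  # any overlap with the marker set
            hits += 1
    return hits / len(abstracts)

pre_release = [
    "We measured enzyme kinetics under varying pH.",
    "The cohort showed a modest change in survival.",
]
post_release = [
    "We delve into the realm of enzyme kinetics.",
    "These findings underscore a meticulous analysis.",
]

print(f"pre-ChatGPT marker rate:  {marker_rate(pre_release):.0%}")   # 0%
print(f"post-ChatGPT marker rate: {marker_rate(post_release):.0%}")  # 100%
```

A sudden jump in marker-word frequency right after a model’s release date is the statistical fingerprint these studies look for.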

The rise of AI-driven science is forcing a reevaluation of what it means to “do research.” As AI agents transition from tools to collaborators, the scientific community must update its norms to reflect this new reality. Rather than suppressing AI’s role, researchers and publishers should focus on establishing clear standards for attribution, accountability, and transparency. At the same time, the scientific process itself is being reshaped by AI-driven experimentation. A growing ecosystem of startups is moving beyond AI as a reasoning tool toward AI as an experimental collaborator. For instance, a survey on “agentic” AI in scientific discovery details emerging systems capable of hypothesis generation, experiment planning, and iterative refinement [9].

Companies are now beginning to build fully automated laboratories that connect AI-generated hypotheses with robotic execution platforms. Berkeley Lab’s automated materials facility, described in a 2025 report, demonstrates how AI-guided robots can rapidly design and test new compounds in a continuous cycle [10]. Industry coverage highlights similar efforts across the startup landscape: blogs and venture announcements from SciSpot (2025), Sapio Sciences (2025), and Dakota (2025) describe new companies developing “self-driving” labs [11, 12, 13]. These efforts collectively illustrate how the physical-world bottleneck for LLMs is beginning to erode.

Taken together, these developments show that the question is no longer whether AI should participate in science but how its participation should be governed. As AI evolves from tool to collaborator, norms must keep pace to ensure transparency, integrity, and responsible oversight. Recognizing AI’s genuine contributions while maintaining human accountability will be essential to keeping science both ethical and innovative in the age of intelligent machines.


References: 


  1. Hulick, K. (2025, October 24). A conference just tested AI agents’ ability to do science. Science News.

  2. Hulick, K. (2025, October 24). A conference just tested AI agents’ ability to do science. Science News.

  3. Agents4Science 2025 Committee. (n.d.). Open Conference of AI Agents for Science 2025. https://agents4science.stanford.edu/index.html

  4. Ramirez, V. B. (2025, July 11). ChatGPT is changing the words we use in conversation. Scientific American. https://www.scientificamerican.com/article/chatgpt-is-changing-the-words-we-use-in-conversation/

  5. Singh, S. (2025, October 7). Latest ChatGPT user stats 2025 (growth & usage report). DemandSage. https://www.demandsage.com/chatgpt-statistics/

  6. van de Vosse, E. (2023, February 24). Acceptance rates of peer-reviewed journals. EV Science Consultant. https://www.evscienceconsultant.com/blog/acceptance-rates-of-manuscripts-by-peer-reviewed-journals

  7. Kobak, D., González-Márquez, R., Horvát, E.-Á., & Lause, J. (2025, July 3). Delving into LLM-assisted writing in biomedical publications through excess vocabulary. arXiv. https://arxiv.org/abs/2406.07016

  8. Haughney, K. (2025, September 8). On-screen and now IRL: FSU researchers find evidence of ChatGPT buzzwords turning up in everyday speech. Florida State University News. https://news.fsu.edu/news/education-society/2025/08/26/on-screen-and-now-irl-fsu-researchers-find-evidence-suggesting-chatgpt-influences-how-we-speak/

  9. Gridach, M. (n.d.). Agentic AI for scientific discovery: A survey of progress, challenges, and future directions. arXiv. https://arxiv.org/html/2503.08979v1

  10. Bentley, K. (2025, September 4). How AI and automation are speeding up science and discovery. Berkeley Lab News Center. https://newscenter.lbl.gov/2025/09/04/how-berkeley-lab-is-using-ai-and-automation-to-speed-up-science-and-discovery/

  11. SciSpot. (n.d.). AI-powered “self-driving” labs: Accelerating life science R&D. https://www.scispot.com/blog/ai-powered-self-driving-labs-accelerating-life-science-r-d

  12. Cooke, L. D. (2025, October 3). Robotic scientists and AI lab automation: Automating experiments from concept to completion. Sapio Sciences. https://www.sapiosciences.com/blog/robotic-scientists-and-ai-lab-automation-automating-experiments-from-concept-to-completion/

  13. Dakota. (n.d.). Lila Sciences raises $350M Series A: The future of autonomous AI labs in 2025. https://www.dakota.com/resources/blog/lila-sciences-raises-350m-series-a-the-future-of-autonomous-ai-labs-in-2025

 
 
 
