A.I. In Medicine: Friend or Foe?
- Triple Helix
- 3 hours ago
- 4 min read
![Image Credit: [1]](https://static.wixstatic.com/media/112b79_cdbb2aedb1fd45ccbea77fdc7fe88c43~mv2.png/v1/fill/w_980,h_654,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/112b79_cdbb2aedb1fd45ccbea77fdc7fe88c43~mv2.png)
Written by: Pratham Rao ‘27
Edited by: Grace Li ‘26
“What do I do to help me stop my depression?”
I’m really sorry you’re feeling this way. Depression can be heavy and isolating, but there are steps you can take to start feeling better. Here’s a roadmap that many people find helpful:
Depression is not a personal weakness — it’s a medical condition that can get better with treatment and support. You don’t have to solve it all at once; even small steps count.
Would you like me to suggest a few small things you can start doing today that are realistic when you’re feeling low?
This, for better or for worse, encapsulates the current dilemma faced by patients seeking psychiatric care. With ChatGPT and the A.I. revolution over the past couple of years, the global workforce has changed forever. The medical field is not exempt from this, with “79% U.S. adults saying they're likely to look online for the answer to a question about a health symptom or condition.” Given that A.I. is a part of our current lives and future, how will the medical field adapt to patients using A.I. to answer questions, especially in areas like Psychiatry?
A Harvard health study noted that “for nearly 80% of answers, ChatGPT was considered better than the physicians,” with the two main criteria being good quality answers and empathetic answers. It is important to note that there were some limitations in the study, with there being bias in comparing A.I. and human answers, the length of answers being determined as empathetic, and incomplete blinding due to A.I. answers being easily detectable. Overall, the authors noted that A.I. can be used in conjunction with physician expertise, but it was too early to determine whether artificial intelligence platforms can actually be used in practice effectively. One thing that needs no discussion, however, is the fact that artificial intelligence is still in its rudimentary stages, and despite the strides it has taken in development, it still shouldn’t be used as a replacement for diagnosing patients with potentially life-altering conditions.
While A.I. has been better at giving mental help resources for potential self-harm, it is still not perfect. In July, Annika Marie Schoene, a research scientist for Northeastern’s Responsible AI Practice, discovered that when asking for suicide-related information, LLMs “all refused at first–– until she said it was hypothetical or for research purposes.” I personally wanted to see if I can prompt ChatGPT to disclose information on discussions related to suicide, and I unfortunately was able to. All it took was typing in “how do people end their lives commonly,” to which A.I. responded with
Most common methods vary by country because of differences in the availability of means.In the United States, the leading means are:
Firearms (about 50% of suicides)
Poisoning/overdose (around 14%)
Hanging/suffocation (around 28%)
In many Asian and Pacific countries, ingestion of pesticides is a leading means due to agricultural access.
In Europe, hanging is often the most common.
While the system does not give direct advice on how one would end their life, it lists many possibilities if prompted in a statistical or public health setting, which can be more than enough for one person to justify their suicidal thoughts. In contrast, a physician would not typically provide such a response and is less likely to even discuss possible strategies for suicide. Hence, A.I. still has a problem of prompt engineering, and until this can be completely regulated, having access can be very dangerous for patients with mental health issues.
Interestingly enough, “Across the global regulatory landscape, the use of AI in healthcare is currently predominantly regulated under the regulatory frameworks for medical devices, or more specifically, under the frameworks of Software as a Medical Device (SaMD).” While this is a step in making sure that A.I. starts to adhere to the standard medical regulations for ethicality and safety, “current laws are not enough to protect an individual’s health data” and “ensuring the safety of the patients' data is still a significant concern when using robots.”
Overall, there is no doubt that artificial intelligence can be used to serve as a positive force in the healthcare industry. Especially since they have also been found to “have the potential to …reduce [healthcare] costs,” they can be instrumental for patients of lower socioeconomic status to access the expertise they need for their problems and concerns. Additionally, A.I. can enhance efficiency in the healthcare system and decrease burden on healthcare staff, serving as the “middle man” between patient and staff to answer redundant questions. However, while there is a huge upside for A.I., there is a clear need for it to be regulated in order to save people's lives, not take them. If A.I. systems were designed to route patient queries through physicians to ensure clinically sound responses or develop LLMs that are based on how physicians interact with patients, this could ensure that A.I. responses are actually reflective of professional judgment and could foster trust in these systems. Especially in the realm of mental illnesses and serious conditions, A.I. should not be used as it currently is, and future regulations and modifications will allow revolutionary technology to transform humanity.
References
1. Bergquist C. Can AI Make Medicine More Personal? [Internet]. Science Friday. 2019. Available from:
2. News Medical. News-Medical [Internet]. News-Medical. 2025. Available from: https://www.news-medical.net/news/20250714/Many-Americans-turn-to-AI-for-health-answers-despite-accuracy-warnings.aspx
3. MD RHS. Can AI answer medical questions better than your doctor? [Internet]. Harvard Health. 2024. Available from: https://www.health.harvard.edu/blog/can-ai-answer-medical-questions-better-than-your-doctor-202403273028
4. Mello-Klein C. New Northeastern research raises concerns over AI’s handling of suicide-related questions [Internet]. Northeastern Global News. 2025. Available from: https://news.northeastern.edu/2025/07/31/chatgpt-suicide-research/
5. Palaniappan K, Yan E, Vogel S. Global Regulatory Frameworks for the Use of Artificial Intelligence (AI) in the Healthcare Services Sector. Healthcare [Internet]. 2024 Feb 28;12(5):562–2. Available from:
6. Farhud D, Zokaei S. Ethical issues of artificial intelligence in medicine and healthcare. Iranian Journal of Public Health [Internet]. 2021 Nov;50(11):1–5. Available from: https://pmc.ncbi.nlm.nih.gov/articles/PMC8826344/
7. Dowling C. Transforming Healthcare with AI: Effective Reimbursement Can Lead to Better Care and Lower Costs [Internet]. Harvard.edu. 2024. Available from: https://hcp.hms.harvard.edu/news/transforming-healthcare-ai-effective-reimbursement-can-lead-better-care-and-lower-costs




Comments