In a study published in the journal Discover Artificial Intelligence, researcher J.E. Grant investigates how undergraduates perceive differences in the helpfulness and thoroughness of AI-generated responses about drug interactions. The study evaluates responses from three prominent AI platforms: ChatGPT 3.0, Gemini 1.5, and GitHub Copilot. By focusing on these differences, Grant highlights the growing role AI technologies play in education, healthcare, and beyond.
The exploration begins with drug interactions themselves, which carry critical implications for patient safety and treatment efficacy. As technology evolves, students increasingly turn to AI tools for quick answers to complex medical queries, yet the reliability and accuracy of those answers remain under scrutiny in academic circles. This scrutiny has prompted calls for a closer look at academic perspectives on AI-driven support systems, particularly in sensitive areas like drug interactions.
The study sheds light on the significance of context when assessing AI-generated responses. ChatGPT 3.0 is known for its broad, user-friendly style, which makes it appealing to undergraduates seeking information. The findings indicate that while students often found ChatGPT's explanations thorough and detailed, they sometimes perceived them as overly verbose. This duality prompts a discussion of the balance between comprehensiveness and clarity, a crucial factor in communicating complex medical information effectively.
Gemini 1.5, by contrast, distinguishes itself with a typically more succinct approach. Many students noted that while Gemini's responses may lack detailed elaboration, they appreciated its ability to distill essential information without overwhelming the user. This speaks to the varying needs of users: some prefer straightforward, uncomplicated answers, while others thrive on thorough explanations. These preferences could shape how educational institutions integrate AI tools into their curricula, particularly in health science courses.
GitHub Copilot, designed primarily to support software developers, brings a unique perspective to this comparison. Although its primary function differs from that of the other two models, students who engaged with its coding assistance reported that Copilot offered concise, practical suggestions. When faced with questions about drug interactions, however, many found its answers lacking in depth. This discrepancy raises questions about how well specific AI tools suit particular tasks and whether a single tool can adapt across domains.
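For readers curious how such a side-by-side comparison could be set up programmatically, the sketch below poses the same drug-interaction question to two of the systems through their public Python SDKs. This is a minimal sketch under stated assumptions: the study itself surveyed students using the consumer chat interfaces rather than these APIs, and the model identifiers "gpt-3.5-turbo" and "gemini-1.5-pro" are illustrative stand-ins, not the exact versions students evaluated. Copilot is omitted because it is typically accessed through editor integrations rather than a comparable standalone SDK.

    # A minimal sketch, assuming API keys for OpenAI and Google are set in the
    # environment. Model names are illustrative stand-ins, not the exact
    # versions evaluated in the study.
    import os

    from openai import OpenAI
    import google.generativeai as genai

    PROMPT = "What are the known interactions between warfarin and ibuprofen?"

    # The OpenAI client reads OPENAI_API_KEY from the environment by default.
    openai_client = OpenAI()
    chatgpt_reply = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
    ).choices[0].message.content

    # Gemini via the google-generativeai package.
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    gemini_reply = genai.GenerativeModel("gemini-1.5-pro").generate_content(PROMPT).text

    for name, reply in [("ChatGPT", chatgpt_reply), ("Gemini", gemini_reply)]:
        print(f"--- {name} ---\n{reply}\n")

Posing an identical prompt to each system and comparing the replies side by side mirrors, in spirit, the head-to-head judgment the surveyed students were asked to make.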
An important finding from Grant’s study is how familiarity with AI tools influences student perceptions. Those with prior experience using ChatGPT demonstrated a greater appreciation for its thoroughness, likely owing to their comfort level in navigating the tool’s responses. In contrast, students who felt less experienced often struggled to decipher complex answers, indicating a potential barrier to effective learning. This highlights the necessity for educational frameworks to incorporate AI literacy, thereby equipping students with the skills to leverage technology effectively.
Furthermore, the research delves into the implications of undergraduates' reliance on AI tools. A prevalent concern is the risk of students becoming overly dependent on AI for their academic work, potentially compromising their critical thinking and analytical skills. Accustomed to instant, AI-generated answers, students might neglect independent research and critical engagement with course material. This underscores the importance of cultivating a critical attitude toward technology, in which students learn to evaluate AI outputs rather than accept them at face value.
Data collection for the study involved surveys and focus group discussions with undergraduate students across various disciplines. This qualitative approach yielded rich insights into the nuanced perceptions students hold of these AI systems. In analyzing the responses, Grant identified recurring themes: trust in an AI's outputs and ease of use emerged as the critical factors shaping which tool students turned to for academic inquiries.
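To make the thematic-analysis step concrete, here is a minimal sketch of how coded excerpts might be tallied by theme. The tool-theme pairs below are invented placeholders that mirror the perceptions described above; the paper's actual coding scheme and counts are not reproduced in this summary.

    from collections import Counter

    # Hypothetical coded excerpts: each entry is a (tool, theme) pair assigned
    # during qualitative coding. The labels are illustrative, not Grant's data.
    coded_responses = [
        ("ChatGPT 3.0", "thorough but verbose"),
        ("ChatGPT 3.0", "thorough but verbose"),
        ("ChatGPT 3.0", "trustworthy"),
        ("Gemini 1.5", "concise and clear"),
        ("Gemini 1.5", "easy to use"),
        ("GitHub Copilot", "lacking depth"),
    ]

    # Tally how often each (tool, theme) pair was coded across transcripts.
    theme_counts = Counter(coded_responses)
    for (tool, theme), n in theme_counts.most_common():
        print(f"{tool}: '{theme}' coded {n} time(s)")

A frequency tally like this is only the reporting step; a real qualitative workflow would also track coder agreement and the provenance of each excerpt.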
In exploring the effectiveness of these AI tools, the study also examines their potential implications in professional environments. Health professionals increasingly use AI to access medical information swiftly, and understanding these tools' limitations becomes vital as professionals navigate high-stakes settings. A significant concern is the potential for misinformation or incomplete responses, which could have grave consequences in clinical practice. Establishing standards for accuracy and reliability therefore becomes paramount as educational institutions adapt these tools for student use.
The future direction of this research will evolve alongside advances in AI technology. As these systems become more sophisticated, exploring their implications for learning and professional practice will grow increasingly relevant. The study highlights the need for ongoing review of AI capabilities and limitations in academia and raises questions about the ethical deployment of such tools; addressing these considerations will be essential as AI integration becomes commonplace in educational settings.
In conclusion, J.E. Grant's research makes a significant contribution to understanding the intersection of AI technology and education, particularly how students interact with AI-generated outputs. Highlighting the subjective nature of their experiences offers a valuable lens through which educators and technologists can rethink the deployment of AI tools in academic curricula. The study serves as a precursor to ongoing discussions surrounding AI in education, inviting stakeholders to weigh the benefits of technological advances against the need to foster students' critical thinking skills.
The implications extend beyond academic settings, influencing how health professionals may utilize AI in practice. As AI tools continue to evolve, it is increasingly critical for all users—students and professionals alike—to remain informed about the complexities of AI-generated information and the importance of retaining a critical perspective. As we progress into an age dominated by artificial intelligence, fostering a nuanced understanding of technology will be key for future generations navigating both academic and professional landscapes.
Subject of Research: Differences in helpfulness and thoroughness of AI responses in education.
Article Title: Undergraduates perceive differences in helpfulness and thoroughness of responses of ChatGPT 3.0, Gemini 1.5, and copilot responses about drug interactions.
Article References:
Grant, J.E. Undergraduates perceive differences in helpfulness and thoroughness of responses of ChatGPT 3.0, Gemini 1.5, and copilot responses about drug interactions.
Discov Artif Intell 5, 260 (2025). https://doi.org/10.1007/s44163-025-00527-y
DOI: 10.1007/s44163-025-00527-y
Keywords: AI, education, drug interactions, ChatGPT, Gemini, GitHub Copilot, student perception, critical thinking, academic integrity.
Tags: academic perspectives on AI tools, AI platforms in healthcare education, ChatGPT 3.0 user experience, context in AI-generated information, drug interactions and AI, Gemini 1.5 AI evaluations, GitHub Copilot in medical queries, implications of AI in patient treatment, reliability of AI in drug safety, scrutiny of AI reliability in healthcare, thoroughness of AI explanations, undergraduates evaluating AI responses