Northeastern AI policies under spotlight after viral New York Times article


Students walk past university signage on the corner of Huntington Avenue and Forsyth Street May 28. The New York Times published an article May 14 featuring Ella Stapleton’s account of a Northeastern professor using AI to develop course content.

A May 14 article published by The New York Times drew sudden attention to Northeastern’s artificial intelligence policies, spurring widespread public debate about the extent to which AI should be used in college classrooms.  

The Times’ piece detailed how Northeastern graduate Ella Stapleton demanded a refund for a class and filed a formal complaint with the university after discovering her business professor, Rick Arrowood, was using AI to create his lecture notes.

Arrowood told the Times the materials Stapleton found were not used in class, but instead served as online resources. While inconsistencies such as spelling errors drew Stapleton’s attention, Arrowood said he had not noticed any issues with the materials at the time.

Northeastern has long positioned itself as a leader in AI among higher education institutions and recently became an early adopter of Claude for Education, an AI tool by Anthropic. However, university policy governing its use is still “forthcoming” due to the experimental nature of the technology.

“Northeastern embraces the use of artificial intelligence to enhance all aspects of its teaching, research, and operations. The university provides an abundance of resources to support the appropriate use of AI and continues to update and enforce relevant policies enterprise-wide,” Northeastern Vice President of Communications Renata Nyul wrote in a statement to The Huntington News. 

Northeastern’s Policy 125, which outlines the use of AI and was last revised March 31, requires anyone at the university who incorporates AI into their work to check its output regularly for “accuracy and appropriateness.”

“As a relatively new technology, we must also use AI in a manner that is consistent with university policies and applicable laws, protects our confidential information, personal information, and restricted research data, and appropriately addresses any resulting risks to the university and our community,” policy 125 reads. 

The university has also incorporated AI into its curricula; Northeastern offers a master’s degree, concentration and minor in AI, a certificate in AI applications and a Master of Professional Studies in applied machine intelligence.

The university has a separate policy for research, entitled “Standards for the Use of Artificial Intelligence in Research at Northeastern.” The policy is based on guidelines released by the National Institutes of Health, or NIH, the agency widely looked to for its recommended “best practices” on the use of AI in scientific research. While Northeastern allows the use of AI in research, researchers must cite specifically how the tool was used.

For research involving human subjects, proprietary data or information that can “reasonably” be used to identify an individual, any use of AI must be evaluated by the AI Review Committee. Similarly, U.S. Government Controlled Unclassified Information, or any other information covered by federal regulations, may not be processed by AI under university policy. Grant proposals, as well as peer-reviewed journal submissions and papers, also may not be reviewed by any AI service under NIH policy, which Northeastern researchers must adhere to.

Neither the NIH nor Northeastern specifically prohibits the use of AI in grant writing, provided that the principal investigator, or PI, keeps track of AI use in their lab and adheres to the university’s general AI policy.

The university acknowledged in the research policy that biases in AI can harm individuals, but left the responsibility for tracking possible discrimination to PIs, stating that they must confirm that “no such unlawful bias or discrimination results from your use of an AI System for research purposes.”

The university also offers several “best practices” for working with machine learning, a type of AI that allows computers to learn from data without being explicitly programmed.

The separate section dedicated to AI use in administrative work, entitled “Standards for the Use of Generative Artificial Intelligence in Administrative Work at Northeastern,” warns that the misuse of AI can put Northeastern at risk and outlines several “dos and don’ts.”

“The ability to use a generative AI chatbot to get tasks done is a revolution in how people work,” the administrative policy reads. “However, the use of generative AI can put Northeastern at risk if used improperly.”

University President Joseph E. Aoun has long championed the use of AI in higher education. His 2017 book, “Robot-Proof: Higher Education in the Age of Artificial Intelligence,” argues for humanics, or interdisciplinary studies that prepare students for a machine-driven world.

In a July 2024 article for the Chronicle of Higher Education, Aoun similarly argued that AI’s grasp on the world is all-encompassing and that “students should therefore best learn to use it.”
