Recently, Fudan University released the "Regulations of Fudan University on the Use of AI Tools in Undergraduate Thesis (Design) (for Trial Implementation)", which clearly sets out "six prohibitions" and regulates in detail the use of artificial intelligence (AI) tools in the writing of undergraduate theses (designs).
As far as I know, this is the first time that a university anywhere in the world has imposed so many "prohibitions" on the use of AI tools. The regulations are therefore both pioneering and exemplary, and they will also prompt further reflection and even debate, helping people find more practical ways to guide and regulate students' use of AI in the face of the challenges it brings.
Is it feasible to ban AI tools for language polishing and translation?
The "six prohibitions" proposed by Fudan University include: prohibiting the use of AI tools to generate or alter the raw data, original pictures, images, and illustrations of experimental results in an undergraduate thesis (design); prohibiting the direct use of AI tools to generate the body text, acknowledgements, or other components of an undergraduate thesis (design); and prohibiting defense committee members and evaluation experts from using any AI tools to evaluate students' undergraduate theses (designs). These regulations are pioneering to some extent, and at the same time they serve as a "target" that opens up more space for discussing how to standardize the use of AI.
It should be said that some of the above provisions are sound, such as those prohibiting the generation of specific content and prohibiting experts from using any AI tools to evaluate students' undergraduate theses (designs). However, some elements deserve scrutiny, for example the clause "prohibiting the use of AI tools for language embellishment and translation".
In my opinion, judged against the characteristics of AI tools themselves, banning their use for language embellishment and translation is hardly a reasonable regulation.
First of all, AI tools are well suited to academic writing and play an inherently supportive role; they have increasingly come to be viewed as aids rather than replacements. Their main function is to help students improve the fluency of their language, fix grammatical errors, optimize sentence structure, and so on. Especially for students learning and working in a foreign language, AI tools can be very useful for language polishing and translation.
From the perspective of language embellishment, AI tools can help students improve the accuracy, fluency and logic of their writing, especially in academic papers, making expressions more precise and in line with the norms of academic language.
From a translation perspective, AI tools can help non-native-speaker students better understand foreign literature and conduct cross-linguistic research, rather than replace their academic work. AI translation tools can greatly improve the efficiency of cross-linguistic academic writing while ensuring accuracy.
Therefore, prohibiting the use of AI tools for touch-ups and translations would deprive students of the opportunity to use this powerful tool to improve the quality of their scholarship.
Second, the use of AI tools is consistent with the goals of academic writing. Academic writing emphasizes logic, rigor, and clarity of expression. The role of AI tools in embellishment and translation is precisely to help students express their ideas more clearly and accurately; the goal of these tools is not to substitute for students' academic creativity.
In terms of usage, AI-generated text does not have creative thinking; it can only generate output based on user-provided information and existing data. Therefore, the role of AI is mainly to help students improve their ability and level of expression, and it cannot replace students' independent thinking and creativity.
The use of spelling and grammar checking tools to improve the quality of writing is widely accepted in the English-speaking world, and AI tools can likewise be seen as a technological aid in academic writing. Since spelling and grammar checkers are widely accepted, why not allow AI tools to play a role in language embellishment and translation?
Finally, students studying and working in foreign languages have a real need for AI tools. They rely on AI tools for language polishing and translation, which provide the linguistic support they need to ensure the linguistic quality and expressive fluency of their papers without compromising academic content. For them, AI tools are an effective way through the language barrier. Prohibiting their use would instead increase the burden of academic writing and could even affect the quality of their papers.
In academia, originality remains key. Academic integrity centers on originality and independent thinking, not linguistic perfection. Prohibiting the use of AI touch-up tools or translation tools will not enhance student originality, but rather may cause students to spend too much time on unnecessary language challenges.
Colleges and universities are still exploring how to meet the challenges of AI
Two years ago, the emergence of ChatGPT posed a great challenge to colleges and universities around the world. Unfortunately, to this day, almost all of them are still figuring out how to deal with this challenge and have yet to arrive at a consensus solution.
In the UK and the US, as AI tools have come into widespread academic use, many universities have begun to develop specific regulations and policies to ensure that AI technology is used without violating academic integrity and in accordance with academic requirements. At present, however, these regulations usually focus on how to use AI tools reasonably, how to ensure the originality of papers, and how to deal with potential academic misconduct triggered by AI.
In 2023, the University of Cambridge in the United Kingdom issued a guidance document on academic integrity, stating that students may use AI tools to help them with their research and study, such as generating initial ideas, helping with verbal expression, finding information, and conducting a literature review, but that AI-generated content may not be used directly for essay writing or as a substitute for independent thinking. In addition, the university addresses the issue of academic integrity by requiring students who use content generated by AI tools to label the source and explain how the AI tool was used. Any failure to label AI-generated content will be considered academic misconduct.
Similar to the University of Cambridge, the Harvard Writing Handbook, a guidance document issued by Harvard University in the United States, also specifies the use of AI tools. It is required that any content generated using AI tools must be clearly labeled in the assignments or papers submitted by students. Students cannot consider AI-generated text as personal originality. At the same time, when using AI tools for content generation, students still need to analyze, think critically, and integrate AI-generated content, and cannot use AI as a "ghostwriting" tool.
The University of London in the United Kingdom has also included a clause on the use of AI tools in its Academic Integrity Policy, suggesting that students may use AI tools in their courses for data analysis, idea generation, structuring, and other ancillary work, provided that the AI-generated textual content is considered as a reference rather than a final submission. The school emphasizes that if students use AI tools in their papers, they must make a clear statement and explanation. The school will ensure that AI-generated content meets the school's academic standards through course guidance and online tool testing.
Yale University in the United States states in its Academic Integrity System that AI tools may not be used to ghostwrite papers, but students may use AI tools for research and analysis. The university recommends that teachers use the preliminary drafts generated by AI tools as a starting point for students' thinking rather than the final draft, while placing special emphasis on students' originality.
Most of the above practices at leading British and American universities focus on educating students in the reasonable use of AI, emphasizing the supplementary role of AI tools rather than the replacement of students' independent thinking. They emphasize academic integrity education, reminding students to abide by academic norms and avoid cheating. At the same time, by requiring students to explicitly declare their AI usage, schools increase transparency and can also effectively reduce the risk of cheating. Such regulations generalize well and can offer lessons for other universities globally, especially regarding the labeling and transparency of AI-generated content.
Detection is the biggest difficulty
As current rule-making at universities shows, the biggest difficulty in developing rules for the use of AI is that AI-generated content is not easy to detect.
Current university regulations, as well as requests from some teachers, mostly prohibit plagiarizing AI-generated content. However, while text generated by AI tools may resemble existing content on the Web, AI does not directly "copy" known text; rather, it generates content similar to what already exists. Special attention therefore needs to be paid to defining "plagiarism" when formulating such prohibitions. Moreover, it is difficult to judge generated content accurately without integrating detection into academic testing systems. This is precisely why universities around the world have so far been unable to impose specific prohibitions.
The rapid development of AI technology makes it increasingly difficult to distinguish the content it generates from human writing. For example, generated text may not be easily recognizable by conventional plagiarism detection tools because AI-generated content is often newly assembled, rather than a direct copy of existing content, as is the case with traditional plagiarism.
Meanwhile, while some existing AI detection tools can help identify certain AI-generated text, these tools are still in the developmental stage and are not yet able to accurately determine all the characteristics of AI-generated content.
In addition, AI tools are inherently diverse. Students may well use different AI tools for a variety of tasks (e.g., language polishing, translation, data analysis), and the fact that many AI tools do not directly generate academic content but serve as aids makes blanket prohibitions in any one area complicated and difficult to enforce.
AI rulemaking should reflect the openness of the academic environment
I have noticed that when universities and colleges around the world are dealing with the use of AI tools today, there is a general tendency to encourage their use as an aid, rather than managing it through prohibition. There are multiple reasons behind this, especially due to considerations of the characteristics of AI technology, the maintenance of academic integrity, and the purpose of education.
As mentioned above, the application of AI tools in education is not a mere "replacement", but rather an aid to improve the efficiency and quality of students' learning. Compared with traditional academic tools (e.g., spell checkers, document management software, etc.), AI tools do not replace students' thinking and creativity, but rather enhance their language expression, analytical skills and efficiency.
Universities around the world are generally concerned with originality and academic integrity, while the use of AI tools does not directly threaten originality per se. In this context, banning the use of AI is often inconsistent with the core goals of education. In the hands of students, AI tools are often used to facilitate inspiration generation and improve the quality of texts rather than direct writing. The issue, therefore, is not the use of AI tools, but how to ensure that students maintain originality and academic integrity. Currently, many leading universities emphasize AI as a tool, and despite its strong supporting role, students still need to have their own analysis, critical thinking and creativity, and AI-generated content still needs to be processed, analyzed and refined by students.
Both at home and abroad, the academic environment should encourage innovation and critical thinking. This is why many colleges and universities tend to encourage students to utilize new technologies to enhance their learning and research, rather than banning them. The goal of education is to help students adapt to a rapidly changing technological environment and to develop the ability to think and innovate using technology. Guiding students to the proper use of AI tools through education and mentoring will both safeguard academic integrity and allow students to remain competitive in a society that is rapidly evolving technologically.
In fact, as technology evolves, the field of education should gradually embrace more advanced tools. Technological advances allow us to accomplish academic writing tasks more efficiently, and AI tools are just one part of the equation. In some ways, banning the use of AI tools goes against the times. Modern education should focus on fostering critical thinking, creativity, and academic integrity in students, rather than excluding the use of technological tools. The use of AI translation tools for cross-linguistic academic communication not only promotes the flow of global academic resources, but also conforms to the trend of globalization of education.