Juries, referees, and committee members may not use generative AI, according to a preliminary position statement from the Dutch Research Council (NWO).
AI programs such as ChatGPT and Perplexity use language models to produce meaningful-sounding text. The possibilities seem endless, and the technology is expected to have far-reaching consequences for society.
But those who review research proposals for the Dutch Research Council are not allowed to use such software, a preliminary position statement now states. All documents that referees, committee members, and jury members receive are confidential; uploading them to an AI program could breach that confidentiality.
Transparent
This position was posted Tuesday on the Dutch Research Council website. Scientists themselves, however, are allowed to use generative AI, “given its potential and development potential,” as long as they proceed with full transparency and verify their results.
Dutch Research Council board member Antal van den Bosch, a professor at Utrecht University, is an expert on language and artificial intelligence. “This is where we draw the line,” he said on the Dutch Research Council website, “but at the same time we see great opportunities, now and in the long term, for the field of AI itself and for its widespread application in science.”
At the same time, there are “problematic aspects” to the technology. These, too, he calls “a scientific challenge.”
The preliminary directive may still change or become more detailed. The Dutch Research Council has established a working group to develop a new AI policy, which should be ready by the second half of 2024.