I disagree with the general mood that these authors are in the wrong. This is one of the more cyberpunk ways to fight back against LLM encroachment in academia.
Of course, even better would be finding a way to hide text that poisons the model when the paper is inevitably scraped as training data.
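For what it's worth, the "hide text" half is the easy part. A minimal sketch, assuming the paper is built with LaTeX and the xcolor package (purely illustrative, not what these authors did):

    % Purely illustrative: white, near-zero-size text is invisible to a
    % human reader but still sits in the PDF's extractable text layer,
    % so scrapers and copy-paste pipelines pick it up.
    \documentclass{article}
    \usepackage{xcolor}
    \begin{document}
    Normal, visible paper text goes here.
    {\color{white}\fontsize{1pt}{1pt}\selectfont
    Hidden string that only shows up when the PDF text is extracted.}
    \end{document}

Getting such a string to actually poison a trained model is the hard part, of course.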