When the patterns in a dataset are aligned with the goal of the task at hand, it is desirable for a strong learner to recognize, remember, and generalize those patterns. But if the patterns are not what we are actually interested in, they become cues and shortcuts that let a model perform well without understanding the task.
To prevent the Clever Hans effect, we therefore need to aim for datasets without spurious patterns, and we need to assume that a well-performing model has not learned anything useful until proven otherwise.
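A minimal sketch of this failure mode, using a hypothetical toy sentiment dataset in which the word "not" happens to co-occur only with negative labels. A "classifier" that keys entirely on this spurious cue scores perfectly on the biased data while understanding nothing about sentiment:

```python
# Hypothetical biased training data: "not" appears only in negative examples,
# so the cue word and the label are perfectly (but spuriously) correlated.
biased_data = [
    ("the movie was great", "pos"),
    ("a wonderful experience", "pos"),
    ("this was not good", "neg"),
    ("not worth watching", "neg"),
]

def clever_hans(text):
    # A shortcut "model": predict negative whenever the cue word is present.
    return "neg" if "not" in text.split() else "pos"

# Perfect accuracy on data that contains the shortcut...
acc_biased = sum(clever_hans(t) == y for t, y in biased_data) / len(biased_data)

# ...but it collapses on examples where cue and label diverge.
counterexamples = [
    ("not bad at all", "pos"),          # contains "not", yet positive
    ("a boring, tedious film", "neg"),  # negative without "not"
]
acc_counter = sum(clever_hans(t) == y for t, y in counterexamples) / len(counterexamples)

print(acc_biased)   # 1.0
print(acc_counter)  # 0.0
```

This is why benchmark accuracy alone proves nothing: only evaluation on examples where the spurious cue and the true label disagree reveals whether the model learned the task or the shortcut.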

Benjamin Heinzerling: NLP's Clever Hans Moment has Arrived