New model releases have a weird side effect: they secretly train the user.
When results suddenly look better, people credit the upgrade. But research highlighted by MIT Sloan suggests something else is happening: people start writing clearer instructions. They add context. They define the goal. They include an example.
The "improvement" shows up because the prompt improved. That's confirmation bias doing accidental upskilling.
Don't wait for the next release note
Practice prompt skills on purpose:
- Clear goal
- Real constraints
- Edge cases
- A definition of "good" vs "bad"
- A quick way to verify
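One way to make the checklist above a habit: treat it as a template rather than something to remember each time. A minimal sketch in Python, with all names and example content purely illustrative (nothing here comes from the original post):

```python
# Assemble a structured prompt from the five checklist items.
# Function name, parameters, and example values are hypothetical.

def build_prompt(goal, constraints, edge_cases, good_vs_bad, verify):
    """Join the five checklist items into one structured prompt."""
    sections = [
        f"Goal: {goal}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Edge cases to handle:\n" + "\n".join(f"- {e}" for e in edge_cases),
        f"A good answer: {good_vs_bad['good']}\n"
        f"A bad answer: {good_vs_bad['bad']}",
        f"How I will verify: {verify}",
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    goal="Summarize this bug report in three sentences",
    constraints=["Plain language", "No speculation beyond the report"],
    edge_cases=["Report contains stack traces", "Report is not in English"],
    good_vs_bad={"good": "names the failing component",
                 "bad": "just restates the title"},
    verify="Check the summary against the report's repro steps",
)
print(prompt)
```

The point isn't the code; it's that forcing every prompt through the same five slots makes missing context visible before you hit send.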
A meaningful chunk of the available performance gain is already sitting with the user, unused.
Source: LinkedIn activity (direct post URL not provided in the exported text).
Referenced link from the original post: https://lnkd.in/ez5ZBAWK