^That's come up with this 'test' in the past.
There's been some research and debate over whether letting the test subject know a machine might be in the equation affects people's judgement, and whether giving the computer agent some sort of persona is a positive or a negative addition to the mix.
In the end, I don't think it really matters. People will knowingly accept falsehood as truth if they're sufficiently motivated to do so. So the fact that a certain percentage of people are fooled doesn't, by itself, tell you much you can run with. But that doesn't make it any less interesting or worthy of further study, because each iteration seems to generate even more significant questions about human consciousness and perception. That's why I love things like the Turing Test: we often learn as much about ourselves as we do about the thing we're trying to study.
That can only be a good thing.
And way back I dabbled for a couple of days with the idea of a Turing program in one of the old AOL chat rooms (where the participants wouldn't be informed). In other words: if the overall ambient intelligence of the environment is reduced, it's easier to pass!
I forget where I saw the remarks, but I now prefer a newer kind of test that's less about "fooling" and more about "different but sufficient" intelligence.
One of my longest-running little thought experiments along these lines is building a system with one of the old Pentium 1 chips (the one with the infamous FDIV division bug) stuck in the middle of it. Sure, the hard number-crunching is done by the newer cores, but somewhere in there the machine "can't trust" itself. So then you build in a meta routine that tells itself exactly that! That forces a fundamentally new type of computing.
So when you ask it "when will this comet next pass Earth", instead of spitting out an answer like 2.673 days, the machine has to say something like "about two and a half days". And when you go into "Chat Mode", the machine knows why it's flawed and has a routine that says things like "Sorry, I just can't get any more accurate than that because of my design".
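Just for fun, here's a minimal sketch of what that meta routine might look like. Everything here is hypothetical (the class and function names, the half-day precision limit) and only meant to illustrate the two behaviors described above: deliberately rounding a precise answer into fuzzy human phrasing, and being able to explain the flaw when asked.

```python
# Toy sketch of the "self-distrusting machine" idea: the system carries a
# note about its own (hypothetical) precision limit, deliberately rounds
# its answers, and can explain why in chat mode.

def approximate_days(days: float) -> str:
    """Round a precise day count to fuzzy phrasing, e.g. 2.673 ->
    'about two and a half days'."""
    halves = round(days * 2) / 2  # snap to the nearest half day
    whole = int(halves)
    words = {0: "zero", 1: "one", 2: "two", 3: "three", 4: "four", 5: "five"}
    name = words.get(whole, str(whole))
    if halves == whole:
        return f"about {name} days"
    return f"about {name} and a half days"

class SelfDoubtingMachine:
    # The meta routine: the machine "knows" it has an untrustworthy core.
    FLAW = ("Sorry, I just can't get any more accurate than that "
            "because of my design.")

    def when_will_comet_pass(self, precise_days: float) -> str:
        # The newer cores computed precise_days; the answer is
        # deliberately degraded before it reaches the user.
        return approximate_days(precise_days)

    def chat_mode_excuse(self) -> str:
        return self.FLAW
```

So `when_will_comet_pass(2.673)` comes back as "about two and a half days", and the interesting design question is that the imprecision isn't a failure mode, it's a declared, self-reported property of the machine.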