"When is that (basic curiosity) not a good reason for running an experiment?"

Well said -- I agree completely.
What I meant to say was: let me not waste the patience of our visitors and members by doing random things that have little chance of providing any useful insight; let me wait until I have an experiment that is more interesting.
Well, off the top of my head, just as a suggestion, in the hope that this may be of help/use... and without wishing to teach my grandmother to suck eggs...
(These are not just my opinions; this is mostly drawn from past marketing experience and training.)
It probably wouldn't be so "random" (as you say), if you published a more clearly defined hypothesis that you wanted to test in the experiment.
If you defined the experiment as a pragmatic piece of test marketing (market research) - which is arguably an accurate description of what it is likely to be - then the possible objectives could reasonably include (drawing on the above thread) items such as the following. These are merely suggestions that you might consider - I do not know whether these objectives are what you intended; I am just supposing.

Objectives:
- To enable the introduction of advertising into the DCF (DC Forum), as a trial of advertising as a possible revenue-generating tool.
- To base the trial results on feedback from the users, regarding their experience of the trial and on the statistical analysis of the usage/traffic of the DCF during the trial. (This will necessitate unambiguous user feedback and clearly defined and measurable performance data.)
- To publish the analysis of the results and the conclusions that can be drawn from them, as a project on the DCF, for users to study and comment on if they wish.
- To provide the users with the ability to disable the advertising (which would be enabled by default) during the trial, if they wanted to (if they didn't have AdBlock+ or similar add-ons).
- To provide the users with the ability to enable the advertising during the trial, if they wanted to (if they did have AdBlock+ or similar add-ons).
- To gather feedback from the users - at the end of the trial and/or during it - about their experience of using the DCF during the experiment.
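By way of illustration, the enable/disable objectives above could amount to something as small as the following sketch. All the names here are invented - I have no idea how the DCF forum software actually stores per-user settings - so treat this as a toy model of the opt-in/opt-out logic, not a real implementation:

```javascript
// Toy model of the trial opt-in/opt-out state described above.
// The class name and methods are hypothetical; the forum would persist
// this per user (cookie, profile field, etc.).
class TrialPreferences {
  constructor() {
    this.inTrial = null;     // null = user has not yet been asked
    this.adsEnabled = true;  // advertising is on by default during the trial
  }
  optIn()  { this.inTrial = true;  this.adsEnabled = true;  } // (re)joining restores the default
  optOut() { this.inTrial = false; this.adsEnabled = false; } // saying "No" disables the ads
  // Trial members may switch ads off (no AdBlock+) or on (with AdBlock+):
  setAds(enabled) { if (this.inTrial) this.adsEnabled = enabled; }
  shouldServeAds() { return this.inTrial === true && this.adsEnabled; }
}

const prefs = new TrialPreferences();
console.log(prefs.shouldServeAds()); // false - not yet asked, so no trial ads
prefs.optIn();
console.log(prefs.shouldServeAds()); // true - default-on for trial members
```

The point of keeping `inTrial` as a tri-state (`null`/`true`/`false`) is that "never asked" and "said No" need to be distinguishable when filtering the trial group later.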
From these objectives, you could work backwards to a hypothesis, something along these lines (say):

Hypothesis:
To identify whether there is an optimal level of:
(a) advertising acceptance by the DCF user community during the trial, coupled with
(b) user experience/satisfaction in using the DCF during the trial.
(What you seem to have in this discussion thread so far is a collection of feedback and opinion about your stated intention, together with some self-prediction of user experience/expectation. This is arguably of little use for testing the above hypothesis.)
What this hypothesis would probably necessitate is at least five objective metrics, for the trial to be of any real/valid use:

Metric #1 - User population (members of the trial group)
If this is a trial marketing exercise, then you do not want to include respondents who are not part of the trial market group. Thus, when users enter the site, they could be asked whether they agree to being part of the trial at the outset. If they said "No", then the default advertising would be disabled. These users would then be filtered OUT of the trial for that and all subsequent visits - unless (say) they decided to become part of the trial at a later stage (so you could leave them the option to join the trial later). You could also leave them the option to trigger their leaving the trial group at a later stage (to avoid unannounced abandonment by members of the group - which could render the data meaningless).

Metric #2 - User lifespan
The length of time during the trial that the user stayed as a member of the trial group. This could be used as a weighting factor for some of the results.

Metric #3 - Acceptance
Measured by the users either:
(a) leaving the default advertising enabled (in those cases where they do not have AdBlock+ or similar), or
(b) disabling AdBlock+ (in those cases where they do have AdBlock+ or similar).
Issues to resolve:
(i) How to determine automatically and with certainty whether a user has visibility of the advertising at the client, or is blocking it at the client.
(ii) If visibility cannot be determined, then how reliable (as a percentage) is the compliant user's confirmation that they have disabled their blocker as per (b)?
The implication here is that the accuracy of this metric will depend on the compliance reliability of the users. This also assumes that when users enter the site, they are made aware of the trial and of the need to do (a) or (b).

Metric #4 - Traffic/performance data
The statistical analysis of the usage/traffic of the DCF during the trial, as per the objectives above.
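On issue (i) above: a common client-side trick - not foolproof, so it only partly answers the "with certainty" requirement (DNS-level or network-level blockers can evade it) - is to plant a "bait" element that ad blockers typically hide or remove, then measure it. A sketch follows, with the decision rule split out as a pure function; the class names are just ones that blocker filter lists commonly target, and none of this is the DCF's actual code:

```javascript
// Pure decision rule: the bait counts as blocked if it was removed from
// the DOM or collapsed to zero height.
function classifyBait(offsetHeight, removedFromDom) {
  return (removedFromDom || offsetHeight === 0) ? 'blocked' : 'visible';
}

// Browser-side wiring (hypothetical): inject a bait element, wait briefly
// for any blocker to act, then classify. `document` is passed in so the
// decision logic above stays testable outside a browser.
function detectAdBlock(document, callback) {
  const bait = document.createElement('div');
  bait.className = 'ad adsbox ad-banner'; // class names blockers commonly hide
  bait.style.height = '1px';
  document.body.appendChild(bait);
  setTimeout(() => {
    const removed = !document.body.contains(bait);
    callback(classifyBait(bait.offsetHeight, removed));
    if (!removed) document.body.removeChild(bait);
  }, 100);
}
```

Even with this, issue (ii) stands: for users whose blocking cannot be detected automatically, you are back to relying on self-reported compliance.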
Method of collecting feedback and making an analysis:
For feedback to have statistical veracity or reliability, you will generally need as large an amount of data as you can get hold of:
(a) A large population to survey: in this case a few hundred (say?) might suffice. Having (say) 10% of 20 people state the view that "such-and-such" carries no statistical relevance, and would only be of any use if you were trying to kid yourself or to lend support to an appeal to consensus - e.g., the "97% of climate scientists agree that..." kind of logical fallacy.
You will need to determine/estimate your total maximum possible population to be surveyed ("X"), and the actual population surveyed ("Y") - i.e., those users who opt in to the trial. You will only know "Y" on a suck-it-and-see basis - i.e., after you actually start/finish the trial.
To increase the probability of having as large a population as possible in "Y", you could:
(i) Before the trial, request and encourage co-operation from all DCF members (maybe offer some kind of an incentive or reward?). This is your main and potentially "captive" audience.
(ii) Before the trial, request/encourage co-operation from other audiences - e.g., (say) from users of other blogs/forums - to enter into the trial.
(iii) Before the trial, update those features of the website that might attract members of a population that might formerly have been unable to access/use your site for whatever reason - e.g., say, blind or poor-sighted people, by enabling ARIA technology (Accessible Rich Internet Applications markup) in the website.
(b) At least 60% response rate from that population: this is a general rule-of-thumb used in statistical census-taking in New Zealand and the UK. For your purposes, you might have to put up with less, but, as it diminishes, the reliability/veracity of your statistical analysis diminishes quite rapidly - as per (a). Reliability/veracity can be described as a function of total population size and response rate.
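To put a rough number on that last point, here is a standard survey-sampling calculation - the margin of error for an estimated proportion at 95% confidence, with a finite-population correction - showing how the margin worsens as the response rate falls. The population figure and rates below are invented examples, not DCF data:

```javascript
// Margin of error for a proportion, with finite-population correction.
// population = the trial group ("Y" above); responseRate = fraction who respond;
// p = assumed proportion (0.5 is the worst case); z = 1.96 for 95% confidence.
function marginOfError(population, responseRate, p = 0.5, z = 1.96) {
  const n = Math.round(population * responseRate);            // actual respondents
  const se = Math.sqrt((p * (1 - p)) / n);                    // standard error
  const fpc = Math.sqrt((population - n) / (population - 1)); // finite-pop. correction
  return z * se * fpc;                                        // half-width of the 95% CI
}

// 300 trial members: a 60% response rate vs. a 20% response rate.
console.log(marginOfError(300, 0.6).toFixed(3)); // roughly +/- 4.6 percentage points
console.log(marginOfError(300, 0.2).toFixed(3)); // roughly +/- 11.3 percentage points
```

So dropping from a 60% to a 20% response rate more than doubles the uncertainty on any "X% of users said..." figure, which is the sense in which reliability is a function of population size and response rate.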
To improve the feasibility and use of the feedback in analysis, it is probably useful to ensure that there is a questionnaire which asks specific closed (but not loaded) questions, designed to elicit specific objective responses on matters that you have identified as being important to assess for the purposes of the trial and in testing the hypothesis.
Some of the questions may need to be antithetical (reverse-keyed), to cancel out "faking" in the responses.
Where a question may necessarily and unavoidably be likely to provide a subjective response, the Kepner-Tregoe approach can be useful in averaging out bias in the population of responses. That could involve (say) multiplying each response by some importance or weighting factor, and then taking the average of the results. (That is, not all responses to some questions would necessarily carry equal weight.)
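As a toy illustration of that weighting idea (only loosely in the Kepner-Tregoe spirit, and the questions, scores, and weights below are entirely invented):

```javascript
// Weighted mean of survey responses: each response's score is multiplied
// by an importance weight, and the total is divided by the sum of weights
// so the result stays on the original scale.
function weightedScore(responses) {
  // responses: array of { score, weight }, e.g. score on a 1-5 scale
  let total = 0, weightSum = 0;
  for (const { score, weight } of responses) {
    total += score * weight;
    weightSum += weight;
  }
  return total / weightSum;
}

const answers = [
  { score: 4, weight: 3 }, // "Were the ads intrusive?" - high importance (hypothetical)
  { score: 2, weight: 1 }, // "Did you notice the banner style?" - low importance
];
console.log(weightedScore(answers)); // (4*3 + 2*1) / (3+1) = 3.5
```

With equal weights this reduces to the plain average, which makes it easy to report both figures and show how much the weighting actually changed the result.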
Avoid mixing up the objective response data with the subjective.