
News and Reviews > Official Announcements

I want to try an experiment on the site for March 2012


app103:
and by the way i think the whole "i'll use your cpu coz ur not as smart as me"  thing stinks.
-nosh (March 03, 2012, 01:55 AM)
--- End quote ---

I didn't mean it that way. I'd love to use the CPUs of smart people too, if I could, but unfortunately for me they don't have a Java plugin enabled or they use noscript.  :(

mouser:
I would be pissed off if I visited a website and a Java applet started, especially if it were to generate bitcoins, etc.

It's one thing to have a special page for it and a link to the page explaining what's going to happen if they click -- but quite another to just launch it when someone lands on your regular page.

Just my 2 cents.

IainB:
I think that this thread could be used to provide some hard evidence in support of the supposition that the camel was designed by a committee.

mahesh2k:
It's one thing to have a special page for it and a link to the page explaining what's going to happen if they click -- but quite another to just launch it when someone lands on your regular page.

--- End quote ---

Isn't that already the case with annoying Google ads? They start running on any website where they are placed, and they collect your IP address, cookies, click patterns, and some other data too. Compare that to Bitcoin mining (which is harmless) and you get the idea: only CPU power is used, without any aggregation of personal information, and that CPU power is shared with the browser, with the remainder used to solve some random puzzle and generate bitcoins.

The thing is, there is a difference between how ads sell and what people surfing the net like or dislike. I have yet to see an AdSense user with some experience in monetization put up a case against that fact.

IainB:
When is that (basic curiosity) not a good reason for running an experiment?
--- End quote ---
well said -- i agree completely.
what i meant to say was: let me not waste the patience of our visitors and members by just doing random things that have little chance of providing some useful insight; let me wait until i have some experiment that is more interesting.
-mouser (March 02, 2012, 03:50 AM)
--- End quote ---

Well, off the top of my head, and just as a suggestion, and in the hope that this may be of help/use...and without wishing to teach my grandmother to suck eggs....    ;)
(These are not my opinions; they are mostly drawn from past marketing experience and training.)

It probably wouldn't be so "random" (as you say) if you published a more clearly defined hypothesis that you wanted to test in the experiment.
If you defined the experiment as a pragmatic piece of test marketing (market research) - which is arguably an accurate description of what it is likely to be - then the possible objectives could reasonably include items such as the following (drawn from the thread above). These are merely suggestions that you might consider - I do not know whether these objectives are what you intended; I am just supposing:

Objectives:

* To introduce advertising into the DCF (DC Forum) as a trial of its potential as a revenue-generating tool.
* To base the trial results on feedback from the users, regarding their experience of the trial and on the statistical analysis of the usage/traffic of the DCF during the trial. (This will necessitate unambiguous user feedback and clearly defined and measurable performance data.)
* To publish the analysis of the results and the conclusions that can be drawn from them, as a project on the DCF, for users to study and comment on if they wish.
* To provide the users with the ability to disable the advertising (which would be enabled by default) during the trial, if they wanted to (if they didn't have AdBlock+ or similar add-ons).
* To provide the users with the ability to enable the advertising during the trial, if they wanted to (if they did have AdBlock+ or similar add-ons).
* To gather feedback from the users - at the end of the trial and/or during it - about their experience of using the DCF during the experiment.
--- End quote ---
From these objectives, you could work backwards to a hypothesis something along these lines (say):
Hypothesis:
To identify whether there is an optimal level of:
(a) advertising acceptance among the DCF user community during the trial, coupled with
(b) user experience/satisfaction of using the DCF, during the trial.

--- End quote ---
(What you seem to have in this discussion thread so far is a collection of feedback and opinion as to what you stated as being your intention, together with some self-prediction of user experience/expectation. This is arguably of little use for testing the above hypothesis.)
What this hypothesis would probably necessitate is at least five objective metrics, for the trial to be of any real/valid use:

Metric #1 - User population (members of the trial group). (Mandatory.)
If this is a trial marketing exercise, then you do not want to include respondents who are not part of the trial market group.
Thus, when users enter the site, they could be asked whether they agree to being part of the trial at the outset.
If they said "No", then the default advertising would be disabled. These users would then be filtered OUT of the trial for that and all subsequent visits - unless (say) they decided to become part of the trial at a later stage (so you could leave them the option to join the trial at a later stage).
You could also leave them the option to trigger their leaving the trial group at a later stage (to avoid unannounced abandonment by members of the group - which could render the data meaningless).

--- End quote ---
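The opt-in/opt-out bookkeeping described above could be sketched roughly as follows (a minimal Python illustration; the `TrialGroup` name and its shape are entirely hypothetical - real forum software would keep this state in its user database):

```python
from datetime import datetime

class TrialGroup:
    """Tracks who has opted in to or out of the trial, so that
    non-participants can be filtered out of the data on every visit."""

    def __init__(self):
        self.members = {}  # user_id -> {"joined": datetime, "left": datetime or None}

    def opt_in(self, user_id, when=None):
        # Joining (or re-joining at a later stage) starts a fresh membership.
        self.members[user_id] = {"joined": when or datetime.now(), "left": None}

    def opt_out(self, user_id, when=None):
        # An announced departure is recorded rather than silently dropped,
        # so its timing can be accounted for in the analysis.
        if user_id in self.members:
            self.members[user_id]["left"] = when or datetime.now()

    def is_active(self, user_id):
        member = self.members.get(user_id)
        return member is not None and member["left"] is None
```

Users who said "No" would simply never appear in the group, and a recorded `left` timestamp distinguishes an announced exit from the unannounced abandonment that would otherwise muddy the data.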

Metric #2 - User lifespan. (Probably mandatory.)
The length of time during the trial that the user stayed as a member of the trial group.
This could be used as a weighting factor for some of the results.

--- End quote ---
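Using lifespan as a weighting factor might look something like this (a minimal Python sketch; the function name and the (value, days) tuple shape are assumptions for illustration only):

```python
def lifespan_weighted_mean(responses):
    """Average per-user results, weighting each user by the number of
    days they stayed in the trial group (longer membership counts more).

    responses: list of (value, days_in_trial) tuples.
    """
    total_days = sum(days for _, days in responses)
    if total_days == 0:
        raise ValueError("no weight to average over")
    return sum(value * days for value, days in responses) / total_days
```

So a rating from someone who stayed in the trial for a month would count for roughly ten times as much as one from someone who dropped out after three days.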

Metric #3 - Acceptance. (Mandatory.)
Measured by the users either:
(a) leaving the default advertising enabled (in those cases where they do not have AdBlock+ or similar), or
(b) disabling AdBlock+ (in those cases where they do have AdBlock+ or similar).

Issues to resolve:
(i) How to determine automatically and with certainty whether a user has visibility of the advertising at the client, or is blocking them at the client.
(ii) If visibility cannot be determined, then how reliable (as a percentage) is the compliant user's confirmation that they have disabled their blocker as per (b)?

The implication here might be that the accuracy of this metric will be dependent on the compliance reliability of the user.
This also assumes that when users enter the site, they are made aware of the trial and the need to (a) or (b).

--- End quote ---
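Issue (ii) above - discounting self-reported confirmations by an assumed compliance reliability - is just a small piece of arithmetic; a hypothetical sketch (names and shape invented for illustration):

```python
def adjusted_acceptance(confirmed, trial_size, compliance_rate):
    """Estimate the true acceptance rate, discounting the users' own
    confirmations (that they can see the ads) by an assumed compliance
    reliability, for cases where visibility cannot be verified at the
    client."""
    if not 0.0 <= compliance_rate <= 1.0:
        raise ValueError("compliance_rate must be between 0 and 1")
    return (confirmed * compliance_rate) / trial_size
```

For example, 80 confirmations out of a 100-strong trial group, at an assumed 90% compliance reliability, would give an estimated acceptance of 0.72 rather than 0.80 - which is exactly the sense in which the metric's accuracy depends on the user.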

Metric #4 - Traffic/performance data. (Mandatory.)
I assume that you will be able to monitor and gather this data for each session from the point where a user enters, peruses/actions DCF discussion threads, and then exits/drops out of the DCF. This may imply the use of cookies, and the user agreeing up front (as above) to being a member of the trial group, gaining visibility of the advertising and accepting cookies from the site.

--- End quote ---
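Collapsing raw per-request log events into per-session figures, keyed by a trial cookie, could be sketched like so (Python; the event shape and names are invented for illustration - the real data would come from the forum's server logs):

```python
def session_stats(events):
    """Aggregate per-request log events into per-session figures:
    entry time, exit time, and pages viewed, keyed by a trial cookie.

    events: iterable of (cookie_id, timestamp, path) tuples; the path
    is kept in the signature because per-thread breakdowns might be
    wanted later, though it is unused here.
    """
    sessions = {}
    for cookie_id, ts, path in events:
        s = sessions.setdefault(cookie_id, {"entry": ts, "exit": ts, "pages": 0})
        s["entry"] = min(s["entry"], ts)
        s["exit"] = max(s["exit"], ts)
        s["pages"] += 1
    return sessions
```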

Metric #5 - Statistical analysis of user feedback/experience. (Highly desirable.)
For feedback to have statistical veracity or reliability, you will generally need as large an amount of data as you can get hold of:
(a) A large population to survey: in this case, a few hundred (say?) might suffice. Having (say) 10% of 20 people state the view that "such-and-such" carries no statistical relevance, and would only be of any use if you were trying to kid yourself or substantiate something via the "97% of climate scientists agree that..." kind of logical fallacy of appeal to the consensus.
You will need to determine/estimate your total max possible population to be surveyed ("X"), and determine the actual population to be surveyed ("Y") - i.e., those users who opt-in to the trial. You will only know "Y" on a suck-it-and-see basis - i.e., after you actually start/finish the trial.

To increase the probability of having as large a population as possible in "Y", you could:
(i) Before the trial, request and encourage co-operation from all DCF members (maybe offer some kind of an incentive or reward?). This is your main and potentially "captive" audience.
(ii) Before the trial, request/encourage co-operation from other audiences - e.g., (say) from users of other blogs/forums - to enter into the trial.
(iii) Before the trial, update those features of the website that might attract members of a population that might formerly have been unable to access/use your site for whatever reason - e.g., say, blind or poor-sighted people, by enabling ARIA technology (Accessible Rich Internet Applications markup) in the website.

(b) At least 60% response rate from that population: this is a general rule-of-thumb used in statistical census-taking in New Zealand and the UK. For your purposes, you might have to put up with less, but, as it diminishes, the reliability/veracity of your statistical analysis diminishes quite rapidly - as per (a). Reliability/veracity can be described as a function of total population size and response rate.

--- End quote ---
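The point made in (a) and (b) - that reliability falls off quite rapidly as the responding sample shrinks - can be made concrete with the standard margin-of-error formula for a surveyed proportion (a generic statistics formula, not something prescribed anywhere in this thread):

```python
import math

def margin_of_error(sample_n, population_n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a surveyed proportion,
    with a finite-population correction. p=0.5 is the worst case;
    z=1.96 corresponds to 95% confidence."""
    if sample_n <= 0 or sample_n > population_n:
        raise ValueError("sample must be between 1 and the population size")
    se = math.sqrt(p * (1 - p) / sample_n)                            # standard error
    fpc = math.sqrt((population_n - sample_n) / max(population_n - 1, 1))  # correction
    return z * se * fpc

# A 60% response from a 300-member trial group vs. a 10% response:
# margin_of_error(180, 300) is about 0.046, margin_of_error(30, 300) about 0.17
```

In other words, dropping from a 60% to a 10% response rate roughly quadruples the uncertainty on any reported proportion - the "diminishes quite rapidly" effect described above.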

Method of collecting feedback and making an analysis:
To improve the feasibility and use of the feedback in analysis, it is probably useful to ensure that there is a questionnaire which asks specific closed (but not loaded) questions, designed to elicit specific objective responses on matters that you have identified as being important to assess for the purposes of the trial and in testing the hypothesis.
Some of the questions may need to be antithetical to cancel out "faking" in the responses.
Where a question may necessarily and unavoidably be likely to provide a subjective response, the Kepner-Tregoe approach can be useful in averaging out bias in the population of responses. That could probably involve (say) multiplying each response by some importance or weighting factor, and then taking the average of the results. (That is, not all responses to some questions would necessarily carry equal weight.)
Avoid mixing up the objective response data with the subjective.

--- End quote ---
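The weighting-and-averaging idea, including reversing antithetical questions to cancel out "faking", might be sketched like this (Python; the function name, tuple shape, and the 1-5 response scale are all assumptions for illustration):

```python
def weighted_score(answers, scale_max=5):
    """Combine questionnaire responses into a single average score.

    answers: list of (response, weight, is_reversed) tuples, where
    response is on a 1..scale_max scale, weight is the importance
    factor for that question, and is_reversed marks an antithetical
    (reverse-coded) question whose scale must be flipped first.
    """
    total_weight = sum(weight for _, weight, _ in answers)
    if total_weight == 0:
        raise ValueError("answers must carry some weight")
    total = 0.0
    for response, weight, is_reversed in answers:
        if is_reversed:
            response = scale_max + 1 - response  # flip the antithetical scale
        total += response * weight
    return total / total_weight
```

A consistent respondent who answers 5 to a question and 1 to its antithetical twin scores 5 on both after reversal; an inconsistent ("faked") pair of answers would partly cancel out instead.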
