Sunday, November 7, 2010

Tying in Course learnings with phase III

Hi all,

Update: It seems to me that Q33 - media habits w.r.t. TV channels - has been lost to us. There's *NO* variation in the responses anywhere in Q33, which suggests the question didn't work as planned. The ranking question template in Qualtrics was pointlessly complex and involved a drag-and-drop operation that took me a while to figure out. Respondents, exhausted at the fag end of the survey, can hardly be expected to have got it right.

Well, now that we are dealing with REAL data, some of our learnings will come from such unfortunate events, I guess. Basically, we have lost all usable information on the media habits question and will likely have to rely mostly on psychographics to design a marketing communications message for the target segment now.

Continuing my last post:

We broke our research problem down into two parts - one relating to the supply side and the other, to the demand side. The supply side essentially asks what set of products the firm should offer and the demand side complements it by asking what customers want in financial/retirement planning products. The demand and supply sides are complementary and simultaneous and must be solved iteratively to arrive at a coherent, consistent answer.

Now that we have in some sense mapped the possible set of supply side options to a manageable set of target customer segments on the demand side, the question arises: what specific analysis methods and techniques might come into play? Please understand there's plenty of leeway here and many different ways to peel an apple. What I'll talk about here is merely one possible way among many alternatives.

Clearly, Factor and Cluster analyses can easily be brought into play - to reduce the number of variables in the psychographics, for instance, and to explore 'natural groupings' among customers on the basis of demographics (life-cycle stage/age, employment, family size etc.), psychographics (risk-return appetite), asset class preference, something else altogether, or some combination of all of these. Just remember the Kotler gyan about what segments should be like - measurable, actionable, reachable, distinct and all that.
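To make this concrete, here is a minimal sketch in R of one way the factor-then-cluster step could look. The file name, the item names (psy1 to psy10), and the choices of 3 factors and 4 clusters are all hypothetical assumptions for illustration, not something taken from our actual survey.

```r
## Minimal sketch: factor analysis on psychographic items, then k-means
## clustering on the factor scores. File and column names are hypothetical.
surv <- read.csv("survey_data.csv")            # hypothetical file name
psy  <- na.omit(surv[, paste0("psy", 1:10)])   # hypothetical psychographic items

fa <- factanal(psy, factors = 3, scores = "regression")  # reduce 10 items to 3 factors
print(fa$loadings, cutoff = 0.4)                         # which items load on which factor

set.seed(1)
km <- kmeans(fa$scores, centers = 4, nstart = 25)  # look for 4 'natural groupings'
table(km$cluster)                                  # sizes of the candidate segments
```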

What about the ANOVA and regression stuff we learned? Specific hypotheses you have - e.g. "People in their 20s and early 30s are markedly more risk averse than those in their late 30s and 40s", or "there is a systematic association between employment type and preference for fixed income/annuities post retirement" - that feed into your storyline can and should be tested using statistical analysis.
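For instance, the two hypotheses above could be checked in R along the following lines. The column names (risk_score, age_group, emp_type, fixed_income_pref) are hypothetical placeholders for whatever the survey variables are actually called, and 'surv' is the survey data frame loaded earlier.

```r
## Minimal sketch; variable names are hypothetical placeholders.
# H1: risk aversion differs across age groups -> one-way ANOVA
summary(aov(risk_score ~ age_group, data = surv))

# H2: employment type is associated with preference for fixed income/annuities
# -> chi-squared test on the cross-tabulation of the two categorical variables
chisq.test(table(surv$emp_type, surv$fixed_income_pref))
```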

OK, what about discrete choice models (like the Logit) that we will do in lec 9? Well, it turns out that just as ANOVA and multivariate regression use the 'Analyze > Fit Model' sequence in JMP, so does Logit. Hallelujah. Just input a categorical Y into the Y area and the 'Standard least squares' method type at the top-right of the screen changes to 'logistic analysis' automatically. Logistic analyses can be used to infer whether particular categorical Ys relate systematically to some set of Xs in a statistically significant way.

For instance, I could have a hypothesis saying "Public sector bank as primary bank choice relates systematically to age (older people prefer this), employment type (govt employees prefer this) and risk appetite (low risk-low return seekers prefer this)" or something like that.
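Outside JMP, the same hypothesis could be examined with an ordinary logistic regression in R. The sketch below assumes a 0/1 indicator psu_bank and predictors age, emp_type and risk_pref - all hypothetical names - in the data frame 'surv' from the earlier sketch.

```r
## Minimal sketch; psu_bank, age, emp_type and risk_pref are hypothetical names.
fit <- glm(psu_bank ~ age + emp_type + risk_pref,
           data = surv, family = binomial(link = "logit"))
summary(fit)     # which Xs relate significantly to choosing a PSU bank
exp(coef(fit))   # odds ratios, often easier to interpret than raw coefficients
```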

Well, applying techniques learned in class would be good and should be done wherever the opportunity arises. Using methods learned outside of class is very welcome too, just be prepared for questions on them.

The primary learning in the project comes from phase III, IMHO. It teaches us how to do phase I better, based on the mistakes made in the phase II questionnaire as reflected in the actual data collected. The biggest learning, IMHO, is a resetting of our own pre-conceived notions of what analyzing real data is like, of what can and cannot be reliably inferred from the data we do have, and so on.

Well, that's it from me.

P.S.
Am prepping lec 9. The hope is to cover discrete choice and a bit of MDS (Multidimensional Scaling) under perceptual maps. I'm unable to locate good MDS tools in JMP and might have to fall back on R for this. Since I see little scope for MDS usage in phase III, a demo on R shouldn't be too troublesome, hopefully, IMHO.
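For the curious, classical (metric) MDS is essentially a one-liner in base R. The sketch below assumes a brands-by-brands average similarity matrix called 'sims' with brand names as row names - a hypothetical input for illustration, not something from our survey.

```r
## Minimal sketch of classical MDS; 'sims' is a hypothetical symmetric
## similarity matrix with brand names as its row names.
d   <- as.dist(max(sims) - sims)   # convert similarities into distances
fit <- cmdscale(d, k = 2)          # project onto a 2-D perceptual map
plot(fit, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(fit, labels = rownames(sims)) # label each brand's position on the map
```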

Sudhir

9 comments:

  1. Dear Sir,

    I feel it'll be more useful if you could just post the take home messages on your blog.
    The long posts tend to dilute the point.

    Thanks,

  2. Hi Navneet,

    Ever the impatient MBA, eh?

    Agreed and that was (hopefully) the last of my longish posts. Take-home bullet points have their time and place. And that is soon coming.

    This blog is meant for a more involved, context and example laden exposition, IMHO.

    In any case, context is important in MKTR as in other things. Take-home generalities sans context can be dangerous, perhaps.

    Sudhir

  3. Dear Sir,

    It is slightly unclear what exactly the project deliverables are. At a macro level, what do we have to analyze from the survey data? Could you please list these out for the benefit of the class.

  4. Hi Zubin,

    Will do that today.

    Sudhir

  5. MEXL has a fairly simple and effective tool for MDS. I hope you can look into it before the session tomorrow.

    m

  6. Yep. We were taught some MDS analysis in ENDM using MEXL.

  7. Dear prof,

    Is it meaningful to include blank or partially filled psychographic data as part of our analysis?

    Regards
    Raghav

  8. Hi Ady and m (proper names would have been nicer, more polite at least),

    I'm unfamiliar with MEXL. I've already prepped for lec 9 using R for some elementary MDS problems, and it would be a tad difficult to change at this stage - I've had enough stumbles in class with new software already.

    So, for now, I'll go with what I have. Thanks for the information re MEXL, though. Those who want to use this for the project are welcome to do so.

    Sudhir

  9. Hi Raghavendra,

    It's a call you have to take.

    You can safely drop respondents with missing values if there's nothing systematic about the respondents being dropped.

    Alternately, you can 'impute' a value, say the mode for that column, in place of the missing value and move on.
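    As a minimal sketch of the mode-imputation route in R (the column name q12 is hypothetical, and the column is assumed to be a factor or character variable in the data frame surv):

    ```r
    ## Minimal sketch of mode imputation; q12 and surv are hypothetical names.
    mode_val <- names(which.max(table(surv$q12)))   # most frequent response
    surv$q12[is.na(surv$q12)] <- mode_val           # fill in the blanks with it
    ```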

    Like I said, it's your call. Just ensure what you are doing makes sense and produces meaningful output.

    Hope that clarified.

    Sudhir


Constructive feedback appreciated. Please try to be civil, as far as feasible. Thanks.