Monday, December 12, 2011

Letter Grades Sent

Hi Class of 2012,

This is my last communication to you as students of MKTR from this blog.


I just sent the letter grades to ASA a few minutes back. The grade distribution remains roughly what it was last year: around 10% earned an A, 49% an A-, about 31.7% a B, and the rest a B-.

Was glad to see some familiar names do well, and was a little disappointed to see another set of familiar names do less well than I'd expected. By familiar names I mean people I came to interact with and know in the course of the course.

So, again, all the best for the journey ahead. Do keep in touch going forward, and should you actually put into practice what you picked up in MKTR, I would love to hear about it. And should you, as an alum tomorrow, want to talk to the MKTR class as a guest speaker, that would be just great too.

Ciao and cheerios,

Sudhir

Saturday, December 10, 2011

Closing all re-eval requests and inquiries.

Received a re-eval request for phase 3 and this was my response. Am posting here because some of it is general in nature and bears wider dissemination.

Hi D,


I reviewed your Team's submission. Before I go into some detail, let me clarify a couple of things.

1. In each of the 10 grading criteria, a "1" means the submission was "at par", i.e. met expectations. On any given criterion, the majority of groups would have scored a 1, so getting a "1" does not mean you did badly. A "1.5" means the submission was 'above par' on that criterion, and a "0.5" means it was 'below par'. A "2" (very rare, and subjective) means the submission was exceptional on that criterion. Hence you will see that 1.5 was the maximum score on most criteria, because some team or the other, on a relative basis, would have done well on any given criterion.

2. This year, almost all teams did well. The mean was higher and the variance lower than in past years. I suspect it has partly to do with the example PPTs being put out: people were able to learn from past mistakes and submit better-than-average work. But that also meant the par value went up for the "at par" and "above par" ratings.

Your team did well on most parameters and was at par on many criteria. The 0.5 you got for ROs was because the ROs were unclear. The DPs seemed uni-dimensional (I didn't cut marks here, though) and somewhat narrow given the project scope. However, the 1.5 in insights obtained is because, despite being narrow, the ROs seem to have been fully addressed.

Teams that put in some form of animation, so that the sequence of steps unfolds by itself, got a 1.5 on creativity. Groups that were able to integrate their analysis into a clean, reasonable set of recommendations did well on the 'results obtained' and 'insights' criteria. And so on.

So overall, I'm sorry to say I do not see scope to change the marks as given. It's not because you did badly but because most other teams also did well, and the bar for scoring at 'above par' levels was raised.

I hope that clarifies.

I shall now close re-eval requests for Phase 3; it's time to release the grades soon.

Sudhir


Wednesday, December 7, 2011

Wrapping up.

Hi all,

Did a lucky draw for 5 people to receive 2000/- in Sodexo coupons. The winners are:

1. Debdutt Patro from Bengaluru for Team Sultanpur

2. Manonita Das from Vizag

3. Rahul Modi from Chennai for DumDum group

4. Mrs Usha Chandrashekaran from Chennai for Team Naihati

5. Varun Verma from Punjab for Team Jhunjuni

The grades will soon be out. Re phase III grading, I received this from K:
Dear Prof. Sudhir,


Could you please let us know if we could have a feedback session on our project report for MKTR?

Since this subject is extremely important going forward for our careers, it is important for us to understand how to improve upon our work.
Thanks and regards,
K
My response:
Sure K,
BTW, which group was yours and how much did you score out of 20?


P.S.
I deliberately didn't look at student names in the first slide so as to avoid any possibility of bias.
Overall, I think the quality of project submissions was certainly higher this year than in the past two. The variance is lower and the mean higher. I believe it has something to do with the example projects helping people avoid reinventing the wheel and go with what works.

I was also pleasantly surprised to see the surfeit of secondary data used in various creative ways to support the storyline. In the car project, I had provided the class with a secondary dataset on car sales by brand. Here, people went out on their own and got the data. Nice.

Some raised the objection that the same IP address was used multiple times. I'd raised this with IT, who said that on LAN connections, the web server's IP may get used as a common proxy. So it's not necessarily the same person re-taking the surveys.

Some teams wrote in 'learnings' that the questionnaire wasn't clear on project goals and could have been designed better. Sure. A concomitant learning is that no questionnaire for a large and diffuse project will be perfect. There'll always be things that could have been done better. Often, clients themselves wouldn't have very clear-cut business problems to give the MKTR teams. All this is part of life, of how things work in the real world. You go with the data you have and see how best what you want can be squeezed out of it.

Thanks to a few teams for some of the creative sher-shayaris I saw in their PPTs. They added zest and liveliness to the whole thing.

That's it, folks. Shall put up the grade distribution in my next, and last, post for this year.

Sudhir

Friday, December 2, 2011

Some Q&As over exam paper viewings

Hi Professor,

I remember at the beginning of the course you mentioned 10 points for attendance. But I see that it’s counted as 7 points. Could you please look into it?
Thanks,
M
My response:

Hi M,

The attendance and CP together constitute 15% (refer course pack and session 1 slides).

Each session attended fully carries a 1%-point credit. However, the first session is not counted (it is prior to final registration), and of the remaining 9 sessions, up to 2 were given 'off' - i.e., no attendance penalty for not showing up in up to 2 lectures. Hence, effectively, the attendance component drops to 7%.
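The arithmetic above, as a quick sketch (the session counts come from the policy described; the split of the remaining 15% is an assumption for illustration):

```python
total_sessions = 10   # course sessions carrying 1% point each
not_counted = 1       # session 1, held before final registration
off_sessions = 2      # up to 2 lectures may be missed without penalty

attendance_pct = total_sessions - not_counted - off_sessions
cp_pct = 15 - attendance_pct   # CP makes up the rest of the 15% component

print(attendance_pct, cp_pct)  # 7 8
```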

I hope that clarifies.
-----------------------------------------------
Dear Prof Sudhir,

My market research experience has been really enriching, especially the real-life project and the data analysis.

I had a separate question on one of the end-term problems. If, in a multiple regression, one variable is found to be totally insignificant based on its t-value and p-value, can we really take its contribution as greater if it has a higher beta than a variable that has a lower beta but is significant?

My understanding was that we can't determine the impact solely from the beta if the variable is not significant; only if a variable is significant based on t- and p-values can we rank variables on their impact using the beta values.

I may be wrong but I wanted to know the right answer and I had given a re-eval request on this aspect too.
Thanks in advance for clarification.
H

My response:

Hi H,

I did go through quite a few re-eval apps which seem to be based on your query. There does seem to be confusion on this score.

1. If a variable isn't significant, then based on either its standardized beta and/or its significance level, it will not show up as impactful. If the standardized beta is still large, it can't be insignificant - the two don't go together.

2. If a variable has been used in a regression model and is later found insignificant, that still doesn't mean the variable can be dropped when computing predicted values. Many folks appear to have made this mistake. This is because the coefficients of all the other variables were obtained given the presence of this variable in the regression. The alternative is to drop the variable, re-run the regression, and then use the new coefficients (betas) from the new model (which does not use the dropped variable) for prediction.
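Point 2 can be sketched in a few lines of numpy; the data are simulated and the variable names hypothetical (this is not the exam dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                      # x2 is irrelevant by construction
y = 2.0 + 1.5 * x1 + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """OLS coefficients for a design matrix X that includes an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Full model: the intercept and x1 coefficients are estimated *given*
# that x2 is in the model, so predictions must keep x2's term.
X_full = np.column_stack([np.ones(n), x1, x2])
beta_full = ols(X_full, y)
pred_full = X_full @ beta_full

# The alternative: drop x2, re-run the regression, and predict with the
# new coefficients from the reduced model.
X_red = np.column_stack([np.ones(n), x1])
beta_red = ols(X_red, y)
pred_red = X_red @ beta_red
```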

I hope that clarifies. I'm glad to hear the project was found relevant and useful. As always, I'd appreciate candid and constructive feedback on how to improve the course for next year and beyond.

Regards,
-------------------------------------------------------------------------------
Dear Professor Voleti,

As posted on your blog: there was a typo in the binary logit question. The coeff of income^2 was shown as -.088 instead of -0.88. Hence, the predicted probabilities of channel watched for all 3 cases would now come up as 1.

I, along with many other students, spent a lot of time trying to solve this question but gave up after attempting parts 1 and 2. The probabilities came out as 1 for part 2, and just by looking at parts 3-4 we concluded that we would get the same answer, and hence left them blank.

I feel it's unfair that we have been awarded 0 marks for parts 3-4 when people have been given full marks for restating the formula stated in parts 1-2.

Can this evaluation scheme be looked into again, taking into account that the data for the question was wrong and we have understood the concept since we have correctly answered parts 1 & 2?
S
My response:

Hi S,

After the typo, the answer to Q2 (c) (ii)-(iv) comes out as Pr(channel) = 1 in all three cases. Because of the typo, we realized people might get different answers, so we decided to award marks to all who attempted the Q.
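To see numerically why the mistyped coefficient can drive all three probabilities to 1, here is a sketch. The intercept, the income coefficient, and the income values below are hypothetical; only the -.088 vs -0.88 contrast comes from the exam question:

```python
import math

def logit_prob(b0, b_inc, b_inc2, income):
    """Binary logit: Pr = sigmoid of the linear predictor."""
    u = b0 + b_inc * income + b_inc2 * income ** 2
    return 1.0 / (1.0 + math.exp(-u))

b0, b_inc = 2.0, 2.0               # hypothetical values for illustration
for income in (5.0, 10.0, 15.0):   # hypothetical incomes for the three cases
    p_typo = logit_prob(b0, b_inc, -0.088, income)  # coefficient as misprinted
    p_true = logit_prob(b0, b_inc, -0.88, income)   # coefficient as intended
    print(round(p_typo, 3), round(p_true, 3))       # prints 1.0 0.0 each time
```

With the small (misprinted) quadratic penalty, the linear predictor stays large and positive over these incomes, so the sigmoid saturates at 1; with the intended -0.88, the quadratic term dominates and pushes it to 0.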

If Pr=1 in all cases, then this should have been written down in all three cases.

If the space is left blank, what judgment is a grader to make? Even stating in words that the answer comes to Pr = 1 in all three cases would have given us grounds to award partial, if not full, credit there.

But graders can't do anything when students leave a Q blank with no justification for why it was not attempted.

So I'm sorry, I cannot at this time accede to the request to change the grading template to include non-attempts as well, simply because I cannot justify doing so. The AAs have been quite liberal in awarding credit on the concerned Q due to the typo.

Hope that clarifies.
-------------------------------------------------------------
Prof Sudhir,

I just checked the answer keys of the end term paper and it appears to me that there are a couple of issues there.

In a log-log regression, a fixed percentage increase in the independent variable leads to B times that percentage increase in the dependent variable. The question asked whether a 1-ounce increase in size will, on average, lead to a 2% increase in sales. The size increase is absolute, hence the statement is false, and the answer key appears to be wrong to me.

Also, if both the adjusted R2 and the R2 are given, should we not consider the adjusted value as the goodness-of-fit indicator, as it takes into account the degrees of freedom and the sample size, thereby explaining the variance more accurately? Additionally, the question itself had a misprint (82.%%), which leads to ambiguity.

I have put the paper for revaluation based on the above. Would be great if you can help me clear my doubts/errors in interpretation.

Regards
R
My response:

Hi R,
I agree with your first point. The 'sales' on the LHS are unit sales, not sales volume in ounces. The correct answer to Q1 (c) (ii) should be FALSE. Chandana and Pankaj, we should meet to discuss this and any other corrections that arise.
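A quick numeric sketch of the log-log point, with hypothetical coefficients: in log(sales) = a + b*log(size), b is an elasticity - it scales percentage changes in size, not absolute ounce changes:

```python
import math

a, b = 1.0, 2.0   # hypothetical log-log coefficients for illustration

def sales(size):
    """Implied sales level from the log-log model."""
    return math.exp(a + b * math.log(size))

# A 1% increase in size raises sales by ~b%, whatever the starting size:
pct = 100 * (sales(10.0 * 1.01) / sales(10.0) - 1)   # ~2.01, i.e. ~b%

# A 1-ounce (absolute) increase is not a fixed % change, so its effect
# depends on the starting size - reading b as '% per ounce' is wrong:
g_small = sales(5.0 + 1) / sales(5.0) - 1    # +1 oz from 5 oz:  ~44% more sales
g_large = sales(50.0 + 1) / sales(50.0) - 1  # +1 oz from 50 oz: ~4% more sales
```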

Re your second point: Q1 (c) (iv) asks not about assessment of model fit but only about the % of variance in Y explained by the Xs. The latter is appropriately measured by the multiple R-sq, while the former is better described by the adj-R-sq.

Re the typo in this part (82.%%): the correct answer is 83.1%, and hence I would have mistyped it as 83.%% rather than 82.%% had I meant the multiple R-sq. So I cannot accede to changing this part of the answer key.

Hope that clarifies.

Sudhir
-----------------------------------------------------------------

More as they come.

Sudhir