List of Management Theories

This is a list of academic management theories that I find worth mentioning.
(work in progress)


Strategy 
  • Porter's Five Forces 
  • BCG Matrix 
  • Cost vs. Value Leadership 
  • Resource-based view of the firm 
  • Dynamic Capabilities 
Management Theories 
  • Scientific Management - Frederick Taylor 
  • Administrative Management Theory - Henri Fayol 
  • Bureaucratic Management - Max Weber 
  • Behavioural Theory of Management - Elton Mayo 
  • Chaos Theory 
  • Systems Management 
  • Evolutionary Theory 
  • Stakeholder Management 
  • Contingency Theory 
  • Principal Agent Theory 
  • The Concept of Coopetition 
  • Knowledge Management - Nonaka 
  • Shareholder Value Management 

Organizational Behavior 
  • Missionary Organization 
  • Theory Z Organization 
  • Maslow 
  • Mintzberg 

Innovation / Business Model 
  • Creative Destruction - Schumpeter 
  • Disruptive Innovation - Christensen 
  • BMI Canvas 
  • Explorative vs. Exploitative Innovation 

Product Development / Operations 
  • SCRUM and Agile Development/Agile Manifesto 
  • Kanban 
  • Lean Management 
  • Complexity Management 

Leadership 
  • Transactional vs. Transformational Leadership 
  • Efficiency vs. Effectiveness - Drucker 
  • Upper Echelons Theory 
Management & Sociology 
  • Social Network Theory - Granovetter 
  • Network Effects 
Group Dynamics 
  • Irrational Exuberance 
  • Nash Equilibrium 
Group Psychology
  • Social Exchange Theory 
  • Social Interdependence Theory 
  • Social Identity Theory - Tajfel/Turner 
  • Social Comparison Theory - Festinger 
  • Competition & Collaboration - Morton Deutsch 
  • Groupthink 


Reflective vs. Formative Models

Reflective Model 

Items <-- Construct 

Construct: Drunkenness 

- uncoordinated walking 
- glassy eyes 
- vomiting 
(items are related) 

Formative Model 

Items --> Construct 

- 10 liters of beer 
- 1 liter of vodka
- 1 liter of wine 
(items are unrelated)

Therefore, formative measures define, produce, or cause the construct rather than vice versa!


Basic Statistical Concepts

nominal - categories, e.g. male vs. female; analyzed via frequencies, percentages (non-parametric)
ordinal - ranked, e.g. Likert scales or first/second/third (non-parametric)
interval - continuous, equal intervals between values, parametric (e.g. temperature in °C)
ratio - interval data whose zero point reflects the absence of the characteristic (parametric)

Discrete - adult vs. non-adult 
Continuous - angry to super angry 

Test Statistic = Systematic Variance / Unsystematic variance
We are comparing the amount of variance created by an experimental effect against the amount of variance due to random factors (such as differences in motivation or intelligence).

It answers the question: what is the probability that our samples are from the same population? You basically compare the means of two or more samples.
The denominator is a measure of unsystematic variance, i.e. variance not caused by the experiment.
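To make the ratio concrete, here is a minimal Python sketch of a two-sample (Welch-style) t statistic; the groups and all numbers are hypothetical toy data:

```python
import statistics

# Hypothetical scores from two small samples (toy data)
group_a = [5.1, 4.8, 6.0, 5.5, 5.9]   # e.g. treatment group
group_b = [4.2, 4.5, 3.9, 4.8, 4.1]   # e.g. control group

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Systematic variance: the difference between the group means.
# Unsystematic variance: the standard error of that difference.
standard_error = (var_a / n_a + var_b / n_b) ** 0.5
t = (mean_a - mean_b) / standard_error
print(round(t, 2))  # t is about 4.16 for this toy data
```

The larger the systematic effect relative to the random noise, the larger t becomes.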

r-value (Effect Size)
An objective and standardized measure of the magnitude of the observed effect; here, the Pearson correlation coefficient. Squaring r gives the proportion of variance explained:
r = .1 (weak effect): 1% of the variance between variables is explained
r = .3 (medium effect): 9% of the variance is explained
r = .5 (strong effect): 25% of the variance is explained
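The r-to-variance-explained relationship (variance explained = r squared) can be checked with a small Python sketch; the data are hypothetical:

```python
# Toy data (hypothetical): hours of training vs. a performance score
x = [1, 2, 3, 4, 5, 6]
y = [2.0, 2.5, 3.5, 3.0, 4.5, 5.0]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
sd_x = (sum((a - mean_x) ** 2 for a in x) / n) ** 0.5
sd_y = (sum((b - mean_y) ** 2 for b in y) / n) ** 0.5

r = cov / (sd_x * sd_y)           # Pearson correlation coefficient
print(round(r, 2), round(r ** 2, 2))  # r and the variance explained
```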

Significance - chance of error (being wrong), in other words the chance of a finding being due to error.
It is the probability of rejecting the null hypothesis when it is actually true.
In business research, the accepted threshold is

p < .05

z-Scores
are standard scores. A z-score states the position of a raw score in relation to the mean of the distribution, using the standard deviation as the unit of measurement.
z = (raw score - mean) / standard deviation
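A minimal Python illustration with hypothetical exam scores:

```python
import statistics

# Toy exam scores (hypothetical)
scores = [55, 60, 65, 70, 75, 80, 85]
mean = statistics.mean(scores)   # 70
sd = statistics.pstdev(scores)   # population standard deviation, 10

z = (85 - mean) / sd             # position of a raw score of 85
print(z)                         # 1.5, i.e. 1.5 standard deviations above the mean
```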

Standard Error 
the standard deviation (or variability) of sample means. The higher the SE, the more the sample means differ from each other.
The lower it is, the more accurately a sample mean reflects the entire population.
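In practice the SE is estimated as the sample standard deviation divided by the square root of the sample size; a quick Python sketch with made-up numbers:

```python
import statistics

# Toy sample (hypothetical); SE = sample standard deviation / sqrt(n)
sample = [12, 15, 11, 14, 13, 16, 12, 15]
n = len(sample)
se = statistics.stdev(sample) / n ** 0.5
print(round(se, 2))
```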

Mean: sum of values / n
Median: the middle value of the sorted sample
Mode: the most frequently occurring value

Standard Deviation 
The square root of the average squared distance of the values from the mean; intuitively, the typical distance of the values from the mean.
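All four descriptive statistics are available in Python's standard library; the values below are toy data:

```python
import statistics

values = [2, 3, 3, 5, 7, 8, 8, 8, 10]  # hypothetical data

print(statistics.mean(values))    # sum / n
print(statistics.median(values))  # middle value of the sorted sample
print(statistics.mode(values))    # most frequently occurring value
print(round(statistics.pstdev(values), 2))  # standard deviation
```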

Variance Extracted 
A summary measure of convergence among a set of items representing a latent construct:
the average percentage of variation explained among the items.

Type 1 Error (False Positive) 
Accepting effects that are in reality untrue

Type 2 Error (False Negative) 
Rejecting effects that are in reality true

Construct Validity (relationship between the measurement instrument and the construct)
Discriminant, convergent, and nomological validity

Discriminant Validity
E.g. how well do the items of the innovation construct differentiate from the construct of strategic validity?

Convergent Validity
How well do the items for the innovation construct converge?
If they do not converge, they are likely not measuring the same phenomenon.
- Cronbach Alpha, cut-off value > .70
- Composite reliability, cut-off value > .60
- AVE Average variance extracted, cut-off value AVE > .50
(AVE = average squared factor loading)
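As a rough illustration of one of these reliability measures: Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale), where k is the number of items. A minimal Python sketch on hypothetical 1-5 ratings for three items of one construct:

```python
import statistics

# Hypothetical 1-5 ratings for three items (rows = respondents)
items = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]

k = len(items[0])  # number of items
item_vars = [statistics.variance(col) for col in zip(*items)]
total_var = statistics.variance([sum(row) for row in items])

alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # about 0.92, above the .70 cut-off for this toy data
```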

Indicator reliability / validity
- significant factor loadings of items >.70, t-values > 1.645

Multicollinearity
The phenomenon in which two or more predictor variables in a multiple regression model are highly correlated, meaning that one can be linearly predicted from the others with a substantial degree of accuracy.

Variance inflation factors (VIF) measure how much the variance of the estimated regression coefficients is inflated compared to when the predictor variables are not linearly related.
They are used to describe how much multicollinearity (correlation between predictors) exists in a regression analysis. Multicollinearity is problematic because it can increase the variance of the regression coefficients, making them unstable and difficult to interpret.
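In the special case of exactly two predictors, the R² from regressing one predictor on the other equals their squared correlation, so VIF = 1 / (1 - r²) for both. A minimal Python sketch with hypothetical predictors (a common rule of thumb flags VIF above 10 as problematic):

```python
# Toy predictors (hypothetical, deliberately highly correlated)
x1 = [1, 2, 3, 4, 5, 6, 7, 8]
x2 = [2, 3, 5, 4, 6, 8, 7, 9]

n = len(x1)
m1, m2 = sum(x1) / n, sum(x2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
var1 = sum((a - m1) ** 2 for a in x1)
var2 = sum((b - m2) ** 2 for b in x2)
r = cov / (var1 * var2) ** 0.5   # correlation between the two predictors

vif = 1 / (1 - r ** 2)
print(round(r, 2), round(vif, 2))  # VIF is well above 10 here
```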

Parametric Tests 
Kolmogorov-Smirnov Test 
if p > .05, the distribution is probably normal 

Levene Test 
tests the hypothesis that the variances of two samples are equal;
if p > .05, the variances are more or less equal 


Cycle time, Velocity, Lead time


In Scrum, velocity plays a role similar to cycle time.

Velocity measures what a development team is able to deliver in terms of developed product backlog items within a sprint.

From this blog post, https://leanandkanban.wordpress.com/2009/04/18/lead-time-vs-cycle-time/, I got this nice definition:

Lead time clock starts when the request is made and ends at delivery. Cycle time clock starts when work begins on the request and ends when the item is ready for delivery. Cycle time is a more mechanical measure of process capability. Lead time is what the customer sees.
Lead time depends on cycle time, but also depends on your willingness to keep a backlog, the customer’s patience, and the customer’s readiness for delivery.
Another way to think about it is: cycle time measures the completion rate, lead time measures the arrival rate. A producer has limited strategies to influence lead time. One is pricing (managing the arrival rate), another is managing cycle time (completing work faster/slower than the arrival rate).
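The two clocks in the quoted definition can be sketched with hypothetical timestamps for a single work item:

```python
from datetime import datetime

# Hypothetical timestamps for one work item
requested = datetime(2016, 3, 1, 9, 0)   # customer makes the request
started = datetime(2016, 3, 4, 10, 0)    # team starts working on it
delivered = datetime(2016, 3, 7, 16, 0)  # item ready for delivery

lead_time = delivered - requested   # what the customer sees
cycle_time = delivered - started    # how long the work itself took

print(lead_time, cycle_time)
```

The gap between the two is the time the request spent waiting in the backlog.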

Sprint retrospectives do increase velocity over time: knowledge and insights are shared, which leads to team learning, which in turn affects velocity positively.
Hence, retrospectives act as a moderator between developing software and output.


Which KPIs should I focus on after a mobile app launch?

These are some additional quick-and-dirty ideas around the question: which KPIs should I look at after launching a mobile app (free-to-play in particular)? Btw, see the articles below on "Key Metrics for App Monetization" and "Healthy Retention Rates".


1. Rolling Retention
In my short article on "Healthy Retention Rates", I primarily focused on rolling retention. That is: take a cohort and track its lifetime. Averaging this figure gives you the average lifetime.

2. DAU Frequency Retention
By that I mean a methodology that takes a snapshot of one specific day and looks at that daily cohort, measuring how many of those active today have also been active on at least 5 of the past 7 days. This also gives you an indication of recency. I have seen top mobile games achieve a 70% DAU frequency retention: of the daily logged-in users, 70% had logged in on at least 5 of the past 7 days! Of course, the DAU basis should be reduced by the number of new daily logins, so that you really count only loyal DAU.
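One way to operationalize this metric (my interpretation, on a hypothetical activity log):

```python
# Hypothetical activity log: user id -> set of days active (day 0 = today)
activity = {
    "u1": {0, 1, 2, 3, 4, 5, 6},
    "u2": {0, 1, 2, 4, 5, 6},
    "u3": {0, 3, 6},
    "u4": {0},            # new user today, excluded from the loyal base
    "u5": {0, 1, 2, 3, 5},
}

# DAU today, excluding users whose only activity is today (new logins)
dau = [u for u, days in activity.items() if days != {0}]

# Share of that base active on at least 5 of the past 7 days (days 0-6)
loyal = [u for u in dau if len(activity[u] & set(range(7))) >= 5]
frequency_retention = len(loyal) / len(dau)
print(frequency_retention)  # 0.75 for this toy log
```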

3. Predictive LTV and Milestone Tracking

Define milestones which can give you an indication as to whether your mobile game will succeed or not.
These two findings are from Tapjoy's research on "predicting the future LTV of your users":
  • "Reaching a critical point of 1,000 users who make > 3 purchases is a good indication that an app will ultimately top $1MM in revenue. 84% of the apps with 1,000 or more users who completed three or more in-app purchases within the first 90 days broke that $1MM threshold.
  • 35% conversion rate from 1st to 3rd purchase was the critical number for breaking the $1MM revenue threshold."
4. Recalibrate your own benchmarks constantly. 

I previously mentioned that D1 (40%), D7 (20%) and D30 (10%) is a good Western benchmark for midcore games. In my experience, East Asians take a slightly more short-term approach. I remember one Japanese executive mentioning the following retention rates as successful:

D1 > 50%
D3 ~ 25-30% (!)
D7 ~ 20%

In his talk, he stressed D3, which empirically seems to work for him as a predictor of whether a game will be successful.

Anyway, these are just preliminary thoughts. Please comment below for further exchange.