Here are a couple of slides excerpted from recent discussions of the WhatsApp acquisition. (The acquisition happened to hit just as I was in the middle of teaching classes on the economics and strategy of digital multi-sided platforms.)
I hope these might be helpful in pointing out the woeful limitations of counting eyeballs, looking at historical cash flows, etc.
With the orientation of the conference in mind, I gave a speech making the point that in fact all of our innovation systems are open. Yes, even the patent system! (Indeed, the word “patent” derives from Anglo-Norman “lettre patente,” meaning “open letter.” )
The key point of my speech to this group was that there are important lessons to be learned by carefully studying the differences in open disclosure policies across innovation systems.
Much of the content of that speech can now be found in our new paper: How Disclosure Policies Impact Search in Open Innovation.
There are two key points to our paper:
The first point simply clarifies a basic distinction in the nature of disclosure policies. Society’s various innovation systems–academic science, the patent system, open source, etc.–can be distinguished in terms of whether disclosures take place only after final innovations are completed (e.g., final inventions, working technology platforms,[…]
Our traditional menu of corporate strategy options is incomplete — particularly as it relates to organising for innovation.
In the Harvard Business Review article, linked here and below, we begin to illustrate how traditional company models and crowd models fundamentally differ in their strengths and weaknesses in solving problems.
Broadly, the fluid and diverse pools of solvers in crowds harness the benefits of diversity; whereas firms assemble stable and well-coordinated mountains of knowledge that are specific to the recurring problems they face.
Therefore, companies should not just consider traditional company approaches, but also crowd-based projects. In so doing, companies can effectively extend the range of problems they can go after in their innovation and development processes–particularly the very challenging problems that benefit from diverse experimentation across approaches.
The article also lays out a broad taxonomy of approaches to organising crowds to solve problems:
> Complementors (ex: iPhone developers)
> Contests (ex: TopCoder)
> Collaborative Communities (ex: Wiki)
> Crowd (Spot) Labour Markets (ex: oDesk)
A challenge in tackling many of today’s big data problems is that it is simply hard to hire people with the skills needed to make proper sense of massive data sets.
Some people understand statistics. Some people understand machine learning. Some people understand how to effectively house data and when to spin up a hundred servers, on demand. Other people know how to properly frame the analysis. Still other people know how to deal with data security or intellectual property issues. It is hard to find people with the right blend of knowledge to tackle a big data problem. There are even fewer people who can look up and down an organisation and tally up a list of data and analytics projects that might be considered–just to get started. Bridging both technical expertise and domain or business expertise has been a particular challenge.
This is true in many areas, but especially true in the case of genomics research. Our ability to quickly and cost-effectively sequence the genome advances faster than Moore’s Law, while our ability to generate algorithms to rapidly and effectively analyze these massive data sets crawls ahead at — I’m sorry — academic pace.
In principle, one would theorise that this sort of problem deserves a crowdsourcing approach to problem-solving, enabling experimentation across different approaches by a large number of solvers. My research team tested this assumption, teaming up with Harvard Medical School.
Here are the results: