November 29, 2006
He reminds us that the flip side to innovating to get ahead (and stay ahead) is learning from, and shutting down, failures, but…even companies thick with innovative products are averse to following through:
If something does not work, the company needs to move on quickly. Failures need to be acknowledged, all possible learning extracted, and then the product should be eliminated.
This is not what happens. Instead, unsuccessful products are left up on the site to rot. Failed experiments become useless distractions, confusing customers who are trying to dig through the options to find what they need and frustrating any customer foolish enough to try them with the obvious lack of support.
He gives good examples from Google, Amazon, and Yahoo — big names in (web) innovation who, nevertheless, allow failed products and services to litter their navigation and soak up internal resources.
Even unsupported products must still be maintained operationally: security patches deployed and, to the extent they use shared libraries and APIs, code updated as those libraries and APIs evolve. Not to mention the customer-support complaints from those sad individuals who stumble upon the unsupported product.
November 22, 2006
Flickr recently launched Camera Finder, a “joint” effort with Yahoo! Shopping. Another sign that Flickr is being (slowly) integrated into the Yahoo! Family. Good coverage by Mashable and Paul Kedrosky.
I like what they’ve done. Their camera finder is everything that Webshots’ Photo Gear Guide should have been (two years ago), but wasn’t…and won’t be.
A little history: Webshots’ Photo Gear Guide (or “Tech Guide”) was the first effort at integrating into CNET, mere months after the acquisition. It had a sordid history, and it could have been much better than it is. It’s now been mostly abandoned (which explains the empty content for editors who’ve since left CNET).
Anyhow, some observations:
First, Webshots’ Photo Gear Guide is butt ugly (and I don’t mean the butt of Grace Park). It’s a nauseating mix of yellow and grey and red. Even two years ago, before our new header, it was still pretty sickening to look at. It tried to have the look of a CNET property, with the cobranding of a Webshots property. Which is what half the pages really are (more on that in a bit).
By contrast, Flickr’s Camera Finder pages look exactly like any other Flickr page. The same prominent colors (even in the graphs!). The same kind of navigation.
Second, there is no original content on Webshots’ Photo Gear Guide. All the editorial content and specs come from CNET Reviews, which is fine. But most of the links send you off to cobranded pages that aren’t even hosted by Webshots (and are even uglier, and still have the old Webshots header). There is absolutely nothing to tie that content into the Webshots community. Nothing.
Flickr’s Camera Finder, on the other hand, has graphs of the popularity of cameras over time within the Flickr community. When you drill down into the cameras, you see photo search results of Flickr photos taken with each camera, sortable in several dimensions. It feels like another way to browse Flickr photos, and also a way to compare cameras.
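That popularity-over-time view can be built from photo metadata alone. A minimal sketch of the aggregation, assuming photos carry a capture month and an EXIF camera model (the records and field names here are hypothetical, not Flickr’s actual data model):

```python
from collections import Counter

# Hypothetical photo records: (month photo was taken, camera model from EXIF).
photos = [
    ("2006-09", "Canon EOS 20D"),
    ("2006-09", "Nikon D50"),
    ("2006-10", "Canon EOS 20D"),
    ("2006-10", "Canon EOS 20D"),
    ("2006-11", "Nikon D50"),
]

def popularity_by_month(records):
    """Count how many photos each camera model produced per month."""
    counts = Counter(records)  # keys are (month, model) pairs
    months = sorted({m for m, _ in records})
    models = sorted({c for _, c in records})
    # One count series per model, indexed by month: graph-ready data.
    series = {model: [counts[(m, model)] for m in months] for model in models}
    return series, months

series, months = popularity_by_month(photos)
print(months)   # ['2006-09', '2006-10', '2006-11']
print(series)   # {'Canon EOS 20D': [1, 2, 0], 'Nikon D50': [1, 0, 1]}
```

Each series becomes one line on a popularity graph; the same counts, keyed by model alone, would rank cameras for the drill-down pages.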
Finally, Camera Finder makes you feel like you’re actually learning something. With the Photo Gear Guide, you get editor reviews and specs and “best buys” based on price-versus-performance tradeoffs. All well and good, and I have no doubt that CNET’s reviewers do a better job than Yahoo’s Shopping editors.
But because you’re not engaged within the Webshots community, you don’t really know how accurate those reviews are, or which camera is probably right for you. By utilizing Flickr’s community, you get a much better sense of not only the quality of photos produced by a camera, but also what kind of photographers are using which models. Find people like yourself. See what they’re using.
Of course, Webshots did not detect cameras two years ago (we do now, and it’s on every photo page unless the owner chooses not to display that information). I seem to recall that Nick and Narendra’s vision for the photo gear guide was exactly what Flickr produced. If you answer the question, “Why did it never quite materialize?” you’ll probably find the core of what’s been missing at Webshots the last several years.
With that said, I’m not sure Flickr’s Camera Finder will see significantly more page views than Webshots’ Photo Gear Guide. It serves a niche audience, one that likely sees surges during the holiday season and maybe again in late spring. One way to counter this is to meaningfully link to the data from photo and member pages.
It will probably generate more revenue for Yahoo!, since they’re linking directly to Yahoo! Shopping and Yahoo! will get a cut of any sales (unlike CNET). Which partly explains why they put more effort into getting it right.
Anyway, a thumbs-up from me. The more you can leverage the interaction and personal choices made by your members, the better your community will be.
November 5, 2006
I was busy with other things, so I’m just now getting around to checking out Google Custom Search Engine (GCSE). I find I’m a bit disappointed after reading Ethan Zuckerman’s explanation of where GCSE falls short:
A little poking solves the mystery pretty quickly. Google Coop Search works by searching against the main Google search catalog, retrieving 1000 results and filtering them against the sites you’ve included in your catalog. This makes sense, computationally – these searches are fast, almost as fast as normal Google searches. Rather than conducting 3000 “site:” searches and collating and reranking the results, Google is sacrificing recall, getting 1000 results and discarding those not in your set of chosen sites, which requires one call to the index and a really big regular expression match.
With the result being:
In other words, the little engine I’ve built is useful only if the sites I’ve chosen are relatively high ranking and authoritative sites on the topics I’m searching on.
When I first read about GCSE, I was picturing tens of millions of bit vectors (and entries in BigTable), corresponding to each “custom engine,” and updated with every refresh of their index. Perhaps some smart stuff to make sure entries that haven’t been rebuilt yet use the old index until they are (BigTable seems good for managing that – see my previous entry on the BigTable paper).
I couldn’t imagine a way to scale it practically, but I figured, “Hey, it’s Google…”
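The version I was picturing looks something like this in miniature, with one integer bitmask per custom engine, one bit per document in the index (all names and the scale are hypothetical; the real index would be billions of documents):

```python
# Toy version of the bit-vector idea: one integer bitmask per custom
# engine, with bit i set iff document i's site is in that engine's list.
docs = [
    ("g1", "example.org/a"),
    ("g2", "blog.example.net/b"),
    ("g3", "example.org/c"),
    ("g4", "other.example.com/d"),
]

def build_engine_mask(doc_list, included_sites):
    """Set bit i when document i belongs to one of the included sites."""
    mask = 0
    for i, (_, url) in enumerate(doc_list):
        site = url.split("/")[0]
        if site in included_sites:
            mask |= 1 << i
    return mask

def filter_results(result_ids, doc_list, mask):
    """Keep only result docs whose bit is set in the engine's mask."""
    index = {doc_id: i for i, (doc_id, _) in enumerate(doc_list)}
    return [r for r in result_ids if mask >> index[r] & 1]

engine = build_engine_mask(docs, {"example.org"})
print(filter_results(["g2", "g3", "g1"], docs, engine))  # ['g3', 'g1']
```

The catch, of course, is rebuilding millions of those masks on every index refresh, which is exactly the scaling problem I couldn’t see around.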
Instead it turns out that it’s pretty much a mash-up. Anybody off the street could retrieve the top N results from Google’s API, filter out sites based on include/exclude lists, and dynamically rerank the rest based on preferences.
I’m not knocking it. That’s the definition of dynamic reranking and usually is how personalization is implemented. I’m just disappointed that they’re not doing something way beyond the norm, technically speaking.
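The off-the-street version is easy to sketch. A toy with hypothetical result data, filtering with one big alternation in the spirit of the “really big regular expression match” Zuckerman describes, then reranking by per-site boosts (the real thing filters Google’s top 1000):

```python
import re
from urllib.parse import urlparse

# Top-N results from a general search API: (url, base relevance score).
results = [
    ("https://bigsite.com/page", 0.9),
    ("https://myniche.org/deep/post", 0.5),
    ("https://spam.example/x", 0.6),
    ("https://another.org/item", 0.7),
]

include = {"myniche.org", "another.org"}   # the engine's site list
boosts = {"myniche.org": 2.0}              # per-site rerank preferences

# One alternation over all included sites, matched against each URL.
site_re = re.compile("|".join(re.escape(s) for s in sorted(include)))

def custom_search(top_n):
    """Discard results outside the site list, then rerank by boost."""
    kept = [(u, s) for u, s in top_n if site_re.search(u)]
    rerank = lambda item: item[1] * boosts.get(urlparse(item[0]).netloc, 1.0)
    return [u for u, _ in sorted(kept, key=rerank, reverse=True)]

print(custom_search(results))
# ['https://myniche.org/deep/post', 'https://another.org/item']
```

Note the recall sacrifice: anything relevant on an included site that didn’t crack the general top N is simply never seen.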
Probably more interesting is how they’ll take the keyword and usage data from CSEs and feed some of it back into Google Co-op.
November 4, 2006
A paper out of NYU explores the question, Was the Wealth of Nations Determined in 1000 B.C.? [pdf]:
What does seem inescapable from this finding (if it is taken at face value despite the caveats) is that development is a very long run process. The tendency of policymakers and international institutions to overemphasize the instruments under their control may have contributed to an excessive weight being placed on the behavior of modern-day governments and development strategies as a determinant of development outcomes.
Their methodology was to measure, from available data, the use (but not the intensity) of cutting-edge technological innovations in 1000 B.C., 0 A.D., and 1500 A.D., and then to run regressions to see how those measures correlate with wealth in today’s nation-states.
Some excerpts from the paper:
For example, in the dataset for 1000 B.C., we consider two transportation technologies: pack animals and vehicles. A country’s level of technology adoption in transportation is then determined by whether vehicles and/or draft animals were used in the country at the time.
In 1000 B.C. the Arab empire and China have an overall technology adoption level of 0.95 and 0.9 respectively, while in India and Western Europe the average adoption level are 0.67 and 0.65 respectively. In 0 A.D. India and Western Europe catch up with China and the Arab empire. In 1500 A.D. Western Europe has completed the transition and is the most advanced of the four great empires with an average overall adoption level of 0.94. China remains ahead most countries with 0.88. But the Indian and the Arab empires have fallen behind. The average overall adoption levels in these empires are 0.7.
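As I read the methodology, a country’s overall score is the share of cataloged technologies in use within each sector, averaged across sectors. A toy version, with made-up sector and technology lists purely for illustration:

```python
# Toy adoption data for one country in 1000 B.C.: per sector, which of
# the cataloged technologies were in use (True) or not (False).
country = {
    "transportation": {"pack_animals": True, "vehicles": False},
    "agriculture": {"plow": True, "irrigation": True},
    "military": {"bronze_weapons": True, "chariots": False},
}

def adoption_level(sectors):
    """Average, across sectors, the fraction of technologies adopted."""
    shares = [sum(techs.values()) / len(techs) for techs in sectors.values()]
    return sum(shares) / len(shares)

print(round(adoption_level(country), 2))  # 0.67
```

Numbers like the 0.95 for the Arab empire or 0.9 for China in the excerpt above are this kind of average, computed over the paper’s actual technology catalog.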
Among the interesting implications is the degree to which, they believe, we may be misreading the causes of poverty:
20 percent of the income difference between Europe and Africa is explained by Africa’s lag in overall technology adoption in 1000 B.C., 7 percent is explained by the technology distance in 0 A.D., and 75 percent is explained by Africa’s lag in overall technology adoption in 1500 A.D. This gives a very different perspective on Africa’s poverty compared to the usual emphasis on modern governments. It also shifts backward in time the historical explanations for Africa’s poverty, compared to the usual emphasis of historians on the slave trade and colonialism.
You may also remember the “latitude hypothesis” popularized a few years ago, which attempted to correlate a nation’s latitude with its prosperity. The authors compare it with their findings:
As emphasized by the previous literature, being far from the Equator tends to be associated with higher levels of current income per capita. Controlling for the latitude of countries, however, does not eliminate the strong positive effect of overall technology adoption in 1500 A.D. on current development. This effect remains statistically significant, though the effect of technology adoption history on 1000 B.C. and in 0 A.D. on current development become insignificant after controlling for the distance to the Equator or after including the tropical dummy.
So, why does the willingness or ability of ancient cultures to adopt new technologies affect us so much today? They offer a couple of pretty mundane hypotheses:
First, it seems to us the simplest hypothesis with which to begin is that adopting one technology today makes it much easier to adopt subsequent technologies in the future […] Second, we suggest (not so controversially) that technology is a principal determinant of a country’s level of development. Hence, countries that currently are the technology leaders are the richest countries and countries that fail to use advanced technologies are the poorest. In short, as Mokyr (1990) memorably argued, technology is the “lever of riches.”
A couple of potential flaws/unanswered questions, which I don’t recall reading in the paper:
- the sparseness of data from 1000 B.C. suggests that a couple of key archeological finds could nullify the correlation, if the now-poorer nations could be shown to have adopted technologies earlier but were later conquered, or suffered natural disasters that destroyed the evidence;
- pre-existing wealth, war, or strokes of luck may have driven both technological adoption and longer-term prosperity.
(Link via Marginal Revolution.)