My Favorite TED Talks of 2009

[Embedded video: Barry Schwartz’s TED talk on practical wisdom]

My other favorites were these:

  • Tim Berners-Lee
  • Bonnie Bassler
  • Rosamund Zander
  • Willie Smits
  • Dan Ariely
  • Liz Coleman

I’ll post their talks when they come out, but you can check them out from the program guide in the meantime.

What were your favorites?

  • My favorites, almost in order:

    Willie Smits (wow)
    Ray Anderson
    Liz Gilbert
    Liz Coleman
    Shai Agassi
    Daniel Libeskind
    Bonnie Bassler
    Bill Gates Q&A
    …and that guy with the intelligent power outlets that save lives and save energy :-)

  • kevindick

    I couldn’t finish watching this talk. It’s hokum. Sure, it appeals to how people want to believe the world works. But where’s the evidence?

    I stopped watching when he used scripted instruction in Chicago as a negative example, when in fact the evidence is that such instruction methods are the only proven way to improve student performance across the full spectrum of demographic groups.

    And who cares about janitors being involved in patient care in hospitals? That’s a symptom of a broken health care model, not something we should aspire to. The vast majority of people will naturally show empathy in face-to-face situations. It’s wired into us. All you have to do is not make rules or incentives against it. So this is not an example of people doing anything particularly wise.

    And the Mike’s Hard Lemonade situation is actually rather instructive. In any test, you get some Type I and Type II error. So you can _always_ find at least some examples to argue for making a test looser or tighter. The question is how the test performs on average. All the evidence is that relying on human judgment is dominated (humans have simultaneously higher Type I _and_ Type II error) by even a simple algorithm in a lot of situations. I would argue that the problem with the child welfare system is that there’s _too much_ human judgment. A toy simulation below makes the dominance point concrete.
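
    To make “dominated” concrete, here’s a toy simulation (illustrative numbers only, not from any real study) where adding judge-to-judge noise to the same decision rule raises both error rates at once:

    ```python
    import random

    random.seed(0)

    # Toy model: each case has a true state (1 = problem, 0 = no problem)
    # and a noisy observable risk score. All parameters are made up.
    def make_case():
        truth = 1 if random.random() < 0.2 else 0
        score = random.gauss(1.0 if truth else 0.0, 0.5)
        return truth, score

    cases = [make_case() for _ in range(100_000)]

    def error_rates(decide):
        fp = sum(1 for t, s in cases if t == 0 and decide(s))      # Type I
        fn = sum(1 for t, s in cases if t == 1 and not decide(s))  # Type II
        n0 = sum(1 for t, _ in cases if t == 0)
        n1 = sum(1 for t, _ in cases if t == 1)
        return fp / n0, fn / n1

    # "Algorithm": a fixed threshold on the score.
    algo = lambda s: s > 0.5

    # "Human": the same threshold, but with extra noise in how the score
    # is read (standing in for inconsistency across judges and days).
    human = lambda s: s + random.gauss(0, 0.5) > 0.5

    print("algorithm (Type I, Type II):", error_rates(algo))
    print("human     (Type I, Type II):", error_rates(human))
    ```

    The noisier reader loses on both error types simultaneously; no trade-off is involved.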

  • I know I’m guilty of the same thing on other subjects, but how can you ask “where’s the evidence” when you stopped before the evidence was presented?

    I watched the talk again just now to see if I still felt like it rang true, and while it does lose a little something outside the context of the rest of the TED talks — which are orchestrated in a particular sequence for maximum impact and meta-message — I have to say I still think Schwartz is dropping bombs.

    One issue I should point out is that the talk was edited. To fill in some of the back story (and evidence), you should know that the 20-minute talk on Practical Wisdom is a distillation of a presumably well-researched book that is being written. Here is a paper on the subject, which gives a number of references that couldn’t appear in the talk, and I’m guessing the book will be more thorough still. You might find his previous TED talk (based on a popular book), The Paradox of Choice, a bit more suited to your tastes; hopefully it will convince you, at the very least, that he’s worth a second look.

  • kevindick

    It’s a matter of Bayesian priors. When I see someone making obviously emotional arguments where I know the evidence contradicts them, my prior is that the rest of what they say is likely to be ill-founded. It’s perfectly rational not to waste one’s limited attention in such situations. (The update is just Bayes’ rule; the toy arithmetic below spells it out.)

    I’ll give him another shot. I’ll start with the paper and if it looks like it’s got a rational argument, I’ll watch the rest of the talk.
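
    For what it’s worth, here is the update I’m describing, as plain Bayes’ rule arithmetic with made-up numbers purely for illustration:

    ```python
    # Bayes' rule with made-up numbers, purely to illustrate the update.
    # H = "the rest of the argument is ill-founded"
    # E = "the talk opens with claims that contradict known evidence"

    p_h = 0.5              # prior: no opinion either way (assumed)
    p_e_given_h = 0.8      # ill-founded talks often open this way (assumed)
    p_e_given_not_h = 0.2  # well-founded ones rarely do (assumed)

    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    posterior = p_e_given_h * p_h / p_e
    print(f"P(ill-founded | bad opening) = {posterior:.2f}")  # -> 0.80
    ```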

  • kevindick

    OK, so I read the paper. I’m sorry, but I’m even less impressed now. This simply is not science. There is no testable hypothesis put forth. There is not even the hint of a proposed experiment.

    In fact, the examples he uses are completely ill-motivated (e.g., teachers and doctors). He expresses no measurable criterion of goodness on which the current practice set forth in these examples falls short. If he did, we might be able to propose some experiments to test whether practical wisdom has anything to offer in these cases.

    Moreover, he makes witch-doctory appeals to real science such as the psychology of judgment and neural networks without actually exploring their application in any detail. I happen to know a fair bit about both these topics and I think, on balance, they argue against his position. I’d be willing to entertain his view if he bothered to make the effort to construct a real argument from these perspectives, but he doesn’t. I have to admit I was nearly apoplectic at him trying to casually invoke their authority as supporting his view.

    Here is the minimum artifact he has to produce before I’m willing to consider his arguments: he has to propose an assessment instrument for practical wisdom, a “practical wisdom quotient” if you will. Once he puts that stake in the ground, we can do any number of experiments to test his various assertions.

    I can definitely think of a path for doing this. So it’s not like I’m dismissing him out of hand. In fact, I’d be willing to bet that there are a variety of situations where a higher PWQ score among participants leads to better outcomes. But I’d be willing to bet even more that these are limited to relatively small groups (100 people or fewer) repeating fairly similar interaction patterns.

  • Daniel

    I pretty much agree with Kevin across the board… The talk is boring and the paper is just awful.

  • @Daniel: I didn’t realize an ad hominem attack was a valid scientific argument :-)

    @Kevin: Is The Black Swan (the book) science? Taleb’s arguments suffer from the same “lack of evidence” and the book can be picked apart in the same way that you are picking apart Schwartz. This is not the point.

    I think the issue (other than your objection to his argument/science) can be boiled down to your statement about dominated strategies. I wish you’d finished watching the talk, where he gives evidence that incentives — even the most directly self-interested ones — can lead people to make the wrong aggregate decision, while the lack of those same incentives can lead to the better outcome for all. It’s the pure tragedy of the commons. Remember that the dominated strategy (cooperation) in the prisoner’s dilemma is the best one for all parties when everyone plays it, so it’s not a valid argument to appeal to Pareto optimality (a toy payoff matrix below makes this concrete).

    I think the crux of Schwartz’s observations (if you will) is that a society that depends on rules and incentives alone is across the board worse (for everyone) than one that can foster, cultivate and appeal to what he calls practical wisdom. Furthermore, an over-reliance on rules and incentives — no matter how “perfect” they are — undermines the ability of humans to achieve the practical wisdom that will get them to a more optimal ESS. His claim is that the optimal ESS is the one where rules and incentives do exist and are carefully crafted and updated, BUT also every individual has practical wisdom and exercises it.

    I believe this to be the case. I know the cognitive science, evolutionary game theory and computational social dynamics literature pretty well myself, and I believe his claim is very well supported. I also believe that societies can achieve this better state of affairs; we are not relegated to that which can be achieved through rules and incentives alone. I see the evidence for this when I travel and immerse myself in other cultures, when I look at the sub-cultures in the U.S., and over time. Not that any society has a monopoly on practical wisdom or is close to being great — that would require that a vast majority of a society’s individuals are practically wise. But there are pockets and flashes that say to me that it is possible. The trend is non-monotonic, but the arc “bends towards” practical wisdom, and I don’t believe this is wishful thinking. I believe that it’s based in reality and the evidence is all around if we look for it.
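
    To be concrete about the game-theory claim, here is the standard one-shot prisoner’s dilemma (textbook payoffs, not anything from Schwartz’s talk):

    ```python
    # Standard one-shot prisoner's dilemma with textbook payoffs.
    # payoff[(my_move, their_move)] = (my_payoff, their_payoff)
    C, D = "cooperate", "defect"
    payoff = {
        (C, C): (3, 3),  # mutual cooperation
        (C, D): (0, 5),  # I'm exploited
        (D, C): (5, 0),  # I exploit
        (D, D): (1, 1),  # mutual defection
    }

    # Defection strictly dominates: whatever the other player does,
    # I score higher by defecting...
    assert payoff[(D, C)][0] > payoff[(C, C)][0]  # 5 > 3
    assert payoff[(D, D)][0] > payoff[(C, D)][0]  # 1 > 0

    # ...yet the dominated pair (C, C) is better for BOTH players than
    # the equilibrium (D, D). "Dominated" and "best for all parties"
    # are not in tension.
    assert payoff[(C, C)][0] > payoff[(D, D)][0]
    assert payoff[(C, C)][1] > payoff[(D, D)][1]
    print("defect dominates, but (C, C) Pareto-dominates (D, D)")
    ```

    Defection is individually dominant, yet mutual cooperation is Pareto-superior; both statements are true at once, which is all I’m relying on.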

  • Jo Jordan

    I’m interested in the scientific test @kevindick might suggest.

    I’d hazard a guess that we are muddling ontology and epistemology.

    Action calls for decisions within a context. So the test was whether or not a moral decision could be made and was made.

    Situations are also path-dependent or one-offs. The measure is the variety of response that emerges and our willingness to judge the outcomes. A standardized outcome is a failure. Think of going into an art gallery and seeing 100 of the same prints – FAIL. We want 100 quite distinct objects that are made, after all, from the same total set of ingredients but in different combinations.

    This debate is also a political issue. Who is allowed to take part in a debate about morality? I would argue that to disbar anyone whose right to participate has not been taken away in an open and contested judicial process is to deny the essential spirit of democracy.

  • kevindick

    I’m honestly having a hard time understanding why you think I should invest any more time in this.

    There’s a huge difference between Schwartz and The Black Swan. Taleb makes distinct predictions about outcomes. We’re currently living through one of the confirmations of those predictions: within a generation, financial institutions that rely on Gaussian-based risk models will experience a supposedly near-impossible shock that forces them out of business. Schwartz makes no testable predictions that I saw.

    Also, I didn’t make any statements about dominated _strategies_. I certainly didn’t appeal to Pareto optimality. My statement was about the quality of judgment. It’s hard to buy into promoting human judgment when human judgment has been shown to be pretty bad outside a narrow range of circumstances. Nevertheless, I think we fundamentally disagree about the implications of Pareto optimality and Nash equilibrium in the Prisoner’s Dilemma if you think defecting is _wrong_ in the single-iteration game. IMHO, it’s not. That’s the point. You need a repeated game. Once you have that, cooperation becomes rational (a quick simulation below makes the point). No magical moral wisdom needed.

    If you can point me to a formal treatment of his theories of moral wisdom and ESS, I will read it. If you can point me to an assessment instrument for moral wisdom, I will review it. If not, my Bayesian prior says listening to the rest of his talk is a waste of time.

    Note that I’m not saying moral wisdom doesn’t exist. It’s merely a set of evolutionary psychology traits that serve as either fast-and-frugal heuristics or cognitive anchors for cooperation in sub-Dunbarian groups. None of this is new, BTW. But Schwartz seems to be trying to elevate it as useful in some formal decision making sense, which I won’t accept unless he produces a formal treatment.

    I think we’ve exhausted the public benefit of further discussion of this unless we can identify the aforementioned formal artifacts. But I’d be happy to chat about it directly and then report back if we make any progress.
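
    Since the repeated-game point keeps coming up, here is a minimal simulation (standard payoffs; the strategies and round count are my own illustrative choices):

    ```python
    # Iterated prisoner's dilemma: tit-for-tat vs. always-defect.
    # Textbook payoffs; True = cooperate, False = defect.
    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

    def play(strat_a, strat_b, rounds=200):
        score_a = score_b = 0
        hist_a, hist_b = [], []
        for _ in range(rounds):
            a = strat_a(hist_b)  # each strategy sees the opponent's history
            b = strat_b(hist_a)
            if a and b:        score_a += R; score_b += R
            elif a and not b:  score_a += S; score_b += T
            elif b and not a:  score_a += T; score_b += S
            else:              score_a += P; score_b += P
            hist_a.append(a); hist_b.append(b)
        return score_a, score_b

    tit_for_tat = lambda opp: True if not opp else opp[-1]  # nice, then mirror
    always_defect = lambda opp: False

    print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))      # (600, 600)
    print("TFT vs defect:", play(tit_for_tat, always_defect))    # (199, 204)
    print("defect pair:  ", play(always_defect, always_defect))  # (200, 200)
    ```

    Mutual reciprocators end up far ahead of mutual defectors, and nothing in the code invokes moral wisdom — just repetition.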

  • Daniel

    @Rafe: I still agree with Kevin. I’ll even quote him.

    “Note that I’m not saying moral wisdom doesn’t exist. It’s merely a set of evolutionary psychology traits that serve as either fast-and-frugal heuristics or cognitive anchors for cooperation in sub-Dunbarian groups. None of this is new, BTW. But Schwartz seems to be trying to elevate it as useful in some formal decision making sense, which I won’t accept unless he produces a formal treatment.”

  • I don’t think defecting is wrong in a single-iteration game, but this does get at the crux of the matter and why I think Schwartz is right (despite the lack of acceptable proof). There is no such thing as a single-iteration game anymore, if there ever was. What the “practical wisdom approach” does is get each of us to act in accordance with that reality. And the really interesting thing is that if we all do act as if the game is iterated, we all get the cooperator’s payoff, and then it doesn’t matter whether I’m right in saying that there are no single-iteration decisions.

  • kevindick

    Then I think we agree more than it appears. The ev-psych literature pretty convincingly argues that many of our social propensities exist to create a framework for viewing interactions as repeated.

    Of course, the same literature also convincingly argues that there are limits, the most famous of which is Dunbar’s number–a limit on the size of cooperating social groups in primates. For humans, the estimate is 150 members.

    This is why I object to Schwartz. He presents moral wisdom as something you have to learn, when evolution provides it to us from birth (provided we don’t actively extinguish it).

    What evolution doesn’t provide is any framework for working in super-Dunbarian groups. That’s what incentives are for, which is what Schwartz is arguing against. So in my view, he has things precisely backwards.

  • I wouldn’t hold the ev-psych literature out as being super-convincing of anything :-) As you point out, it’s all circumstantial and narrative argumentation, and it’s pretty easy to argue both sides of any coin within an ev-psych framework. That said, I am not blasting ev-psych as unscientific. I think logical narrative argument is an integral part of science. And even at its most speculative (which ev-psych is), it is a precursor that leads to sharpening of assumptions, good experiments, and falsifiable predictions.

    I think evolution provides us with the basics in the way you suggest (tribes of dozens). I think that incentives (and rules) are a great way to scale those basics to large societies. Schwartz agrees — if you listen to his talk, he says they are absolutely mandatory — but argues they are not enough to get us to an even better equilibrium. He argues that practical wisdom, hand in glove with incentives and rules, is the way. His argument is in the style of ev-psych, which is to say narrative.

    I think the only part where you and I disagree is how good his argument is and whether it’s contradicted by current evidence. I’m happy to agree to disagree on this point.

  • kevindick

    I know what you mean about the ev-psych literature in general. But I’m actually talking about the _real_ experiments that show differences in cooperative behavior among primates. So no, those are not simply logical narratives.

    I’d love to see some evidence that he can get us to a better equilibrium. But I’m skeptical that it doesn’t rely primarily on a rational understanding of incentives.

    It would be nice if he or someone else would propose an experiment that involved cooperation in groups of, say, 500. Then there could be three treatment groups and a control. One gets moral wisdom training by Schwartz, one gets an optimal mechanism design by an economist, one gets both, and one gets nothing.

    My money is that the mechanism design is significantly better than the moral wisdom training and not significantly worse than both combined. 95% confidence intervals. If the experimental design were acceptable, I’d put up $1000 and give 2-1 odds. (A sketch of how I’d score it appears below.)
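
    A sketch of how the bet might be scored, on simulated placeholder data (NOT results from any real experiment), using two one-sided tests at the 95% level:

    ```python
    # Scoring the proposed bet on simulated placeholder outcomes.
    # One cooperation score per group replication; all numbers invented.
    import numpy as np
    from scipy import stats  # `alternative=` needs scipy >= 1.6

    rng = np.random.default_rng(42)

    mechanism = rng.normal(70, 10, size=30)  # mechanism design only
    wisdom    = rng.normal(60, 10, size=30)  # moral wisdom training only
    both      = rng.normal(72, 10, size=30)  # combined treatment

    # Claim 1: mechanism design significantly better than wisdom training.
    t1 = stats.ttest_ind(mechanism, wisdom, alternative="greater")

    # Claim 2: mechanism design NOT significantly worse than the combination,
    # i.e. we fail to find the combination significantly better.
    t2 = stats.ttest_ind(both, mechanism, alternative="greater")

    alpha = 0.05
    i_win = (t1.pvalue < alpha) and not (t2.pvalue < alpha)
    print(f"claim 1 p={t1.pvalue:.4f}, claim 2 p={t2.pvalue:.4f}, win: {i_win}")
    ```

    Swap in real group outcomes for the placeholder arrays and the same two tests settle the bet.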

  • Now you’re speaking my language.

    But I have to clarify two things: (1) Practical/moral wisdom is not something you can teach; you learn it via experience. (2) Neither Schwartz nor I am arguing which is “more important”, just that the combination is superior to either alone.

    Thus, if we can agree on a mechanism for choosing already practically wise participants for the Schwartz group, and we can agree on a definition of what it would mean for the combination to be considered “statistically significantly superior”, then we have a bet. I’ll ignore the odds part since we don’t even have the basic proposition nailed down yet.

  • Jo Jordan

    Aren’t you all missing the point?

    1. We know we can manipulate people. We don’t want to.

    2. What model will you use to describe emergent behaviour? As you are well up on this, can we use a Lorenz model to describe healthy behaviour, as Marcial Losada has done? Fredrickson & Losada, 2005 I think, in the American Psychologist.

    If you are interested, I could send you some more links.

  • @Jo, would love to see any links (esp. publicly available), fire away!

  • kevindick

    Wait a minute, Rafe. If we can’t affect how much moral wisdom people have, what are we arguing about? It seems like there’s nothing we can do to get to this better equilibrium, then.

    However, this does suggest an easy way to resolve the question. Old people should show higher levels of cooperation. That could well be data that already exists.

  • Daniel

    My brother points out that the nuclear waste example Schwartz uses is flawed (at least in the way he presents it).

    When surveyed, 50% of people are willing to have a nuclear waste dump near them.

    When offered six weeks’ salary, only 25% of people are willing to have a nuclear waste dump near them.

    Schwartz uses this example to advance the notion that incentives confuse a responsibility to the greater good.

    The problem is that his second example frames the situation differently. If the government is offering me money to do something, I *strongly* question their motives, my personal good (nothing wrong with that), AND the greater good.

    If anything, the addition of incentives has caused people to take a closer look at the situation, and they realize the nuclear waste is worse than they originally thought. So while incentives have reframed the problem, I am not sure this is a bad thing.

    At ~13:00 Schwartz cites Obama: “We must ask not just is it profitable but is it right.” To me it is apparent from the aforementioned example that the addition of incentives has caused people to re-evaluate whether the nuclear waste dump is “right.”

  • @Daniel, I’ll take a page out of Kevin’s book and say: nice theory, but all the data on cognitive dissonance disagrees with your interpretation. This aspect of the talk is a “no-brainer”.

  • @Kevin, we CAN affect how much moral wisdom people have by being living examples. It’s just not “teachable”. Kids (and adults) do what we do, not what we say. When we say one thing and do another, kids model the latter as soon as the rules can be skirted or no longer apply. This comes out in odd ways for some people as adults. They say they want to be nothing like their parents, yet somehow that’s exactly what they end up being like.

  • kevindick

    Uhh… OK. That sounds pretty fishy to me. How, then, do I identify people who have moral wisdom?

    Put another way: how do I measure who is setting a good example?

    If you can’t tell me this, we can’t possibly have a bet because there’s no way to construct an experiment.

    And you better not say that we look for people who cooperate more. Then all you’re saying is that people who cooperate more are likely to cooperate more.

    I will absolutely stipulate that societies with high levels of cooperation now will tend to have high levels of cooperation in the future. Cooperation is somewhat culturally stable.

    But that doesn’t tell me how a society with high levels of cooperation now gets to even higher levels of cooperation in the future.

    Tell me what the proposal is and I’ll tell you how to test it.

  • Jo Jordan

    While I get some links together, @kevindick, don’t your comments prove the case?

    In demanding that a proposition be tested, you have taken a moral position. Yet you don’t put your demand to a moral test.

    By your own standards, you must define a test for the moral position you have taken.

    Is your position moral?

  • kevindick

    @Jo. No, my position is not moral—it’s amoral (not immoral either). I thought that was clear.

    I demand to test a proposition to know if it’s right. The application of that knowledge is what’s moral or immoral.

  • Jo Jordan

    @kevindick

    You simply refuse to discuss the morality of your position – that doesn’t make it less of a moral stand. It simply makes it an arrogant stand.

    You have to show us why testing is so important. It’s one thing to discover positivism and to know we can test things. It is another to claim that testing is the most important thing to do.

    That is what you have to demonstrate – for your argument to be logically consistent – with a test!

  • Daniel

    @Jo

    Perhaps we don’t think the morality of the request (to test) is really relevant. I’ll concede arrogance if need be.

    Considering what Schwartz is proposing, I really don’t think a test is too much to ask. He’s already sold everyone with a 20-minute talk. Before his ideas spread like wildfire, is the proper position not to challenge them?

    (Obama undoes rule to protect moral conviction, http://www.nytimes.com/2009/02/28/us/politics/28web-abort.html?_r=1&hp )

  • The moral conviction “rule” is an interesting example. I will attempt to represent Schwartz’s position, but of course ultimately it’s just my own since I haven’t spoken to him.

    Obama is doing the right thing by overturning it. You can’t entirely legislate or incentivize morality. At some point we must hold people responsible for their decisions and judgment calls, and NOT let them off the hook with catchall protections. Not only must we hold people responsible, but we must leave some room for moral judgment lest we train people out of thinking critically and exercising compassion.

    Will this automatically lead to practical wisdom? No, that comes from experience and role models. But without any personal responsibility left on the table, it can’t develop. And if the penalties for breaking the rules are incommensurately large relative to the situation, practical wisdom will be trumped by self-interest.

    A quick point about moral-conviction passes: forcing a health worker to do something against their conscience is just as silly as giving them cover for breaking the rule that violates their conscience.

  • Daniel

    I don’t have a problem with this. Certainly there is some balance with the amount of rules and incentives. But I think Schwartz’s work is better suited for a mainstream book than a science journal.

  • Well, the talk is based on a mainstream book he is writing. He made that clear in the unedited version of the talk, and I pointed that out at the top of the comments thread. We got sidetracked by cluttering the issue with science :-)

  • kevindick

    @Jo

    I think you badly misunderstand the purpose of this discussion. I don’t believe there is any terminal value to what Schwartz or anyone else thinks is moral behavior. If you want to call that arrogant, fine. But technically, it’s amoral.

    Implicitly, Rafe thinks I should take Schwartz’s information into account in my own behavior because it has instrumental value in achieving my own terminal goals (many of which he shares).

    So I am not asserting testing is the most important thing. I am asserting that testing is required to settle the question of whether there is any instrumental value to Schwartz’s assertion.

  • Daniel

    @Rafe

    I know, and I’m guessing it will be popular. I believe Schwartz first cluttered the issue with science (not all of which he fully understands) and this is largely what started this debate.

Pingback: The Limitations & Dangers of Incentives « The Emergent Fool