A couple of weeks ago Kevin and I went around on the topic of whether or not science is “broken”. We came to the point of agreeing that we have different basic assumptions about what constitutes “utility”. And because of this, while we could agree that each of our arguments made sense logically, we ultimately ended up with opposite conclusions. After all, for something to be broken means that it once served a purpose that it can no longer serve due to mechanical/structural failure. And to have a purpose means that it has value (i.e. utility) to someone.
So whether science is broken or still works depends on your definition of utility. Kevin and I agreed on a measurement for scientific utility, based on (a) how well it explains observed phenomena, (b) how well it predicts new phenomena, and (c) how directly it leads to the creation of technologies that improve human lives. We can call it “explanatory power” or EP for short. …
- “It depends…”
- There’s a difference between a good decision and a good outcome
- There’s a difference between being broke and having no money
- There’s a difference between a winning person and a winning player
- It’s all one long game
- The long run is longer than you think
- Inaction can be very risky behavior
- Embrace the beauty of uncertainty
- Objectivity is the only thing that’s impossible
- Whatever your “leak”, that’s what will get you in the end
- Luck and skill are two sides of the same coin
- Material wealth is a manifestation of interior wealth
- There’s time enough for counting… when the dealing’s done
- There’s no difference between value and values
- Your word is mightier than the law
- Win-win beats win-lose any day
- Trust everyone, and don’t count the cards
By this I mean just what you think I mean.
Is science dysfunctional (i.e. functioning against its stated purpose) and could it be fixed? I will leave it to you to determine what science’s stated purpose is, though by any standardly accepted definition, I claim that science is broken. I’d like to run an experiment here to try to either change my belief or solidify it.
In the comments below, I invite you to use the Like buttons to vote on what you believe. You have only three boxes to choose from: Broken, Not Broken, and Undecided. I respectfully ask you to first use the appropriate Like button and only then add your arguments/comments/questions if you have them. Also, please categorize your arguments/comments/questions by making them replies to one of the three top-level boxes (if you “think outside the boxes” I will delete your comment; sorry, it’s my experiment :-)
To begin the debate, I will refer you to two blog entries which …
Several years ago I became aware of Eliezer Yudkowsky’s “AI-Box Experiment”, in which he plays the role of a transhuman “artificial intelligence” and attempts (via dialogue only) to convince a human “gatekeeper” to let him out of a box in which he is being contained (presumably so the AI doesn’t harm humanity). Yudkowsky ran this experiment twice, and both times he convinced the gatekeeper to let the AI out of the box, despite the fact that the gatekeeper swore up and down that there was no way to persuade him to do so.
I have to admit I think this is one of the most fascinating social experiments ever conceived, and I’m dying to play the game as gatekeeper. The problem, though, that I realized after reading Yudkowsky’s writeup is that there are (at least) two preconditions I don’t meet:
Currently, my policy is that I only run the test with people who are actually advocating that an AI Box be used …
I’m doing some research for an upcoming talk I’m giving and I’d appreciate hearing your honest answer in the comments below.…