A: You know I like the idea of using logic and logical deduction to understand how thinking should be done. This idea that beliefs are, or at least should be, the conclusions of deductive arguments is very clear and elegant. But I do worry…

B: You worry?  Tell me about your worries.

A: For a start, I wonder where the premises of these arguments come from.

B: Why they come from previous arguments, of course.

A: But what if I really don’t know what the premises should be? And I haven’t, umm, made any previous arguments.

B: Come now, you always have some premises to work with.

A: But really, I don’t!

B: Well at the very least, you’ve got all the truths of logic. You don’t need any premises to derive ‘P or not P’, or ‘if P then (P or Q)’, and things of that sort.

A: Those won’t get me very far…

B: True, but they’ll keep you consistent wherever you end up going, and if you accidentally derive one of their negations you’ll know you’ve got a problem in your premises.

But you’re probably worried that you’re just not sure of your premises.  If only there were some numerical way of representing that…  Anyway, you can always proceed by assuming some premises or other and seeing where they would have led if you had actually held them.
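(An aside: the “truths of logic” B mentions can be checked mechanically. Here’s a minimal sketch in Python that brute-forces the truth table; the function name and examples are my own, purely illustrative.)

```python
from itertools import product

def is_tautology(formula, variables):
    """A propositional formula is a tautology iff it evaluates to True
    under every possible assignment of truth values to its variables."""
    return all(formula(*values)
               for values in product([True, False], repeat=len(variables)))

# 'P or not P' holds under every assignment:
print(is_tautology(lambda p: p or not p, "P"))
# So does 'if P then (P or Q)', reading 'if X then Y' as 'not X or Y':
print(is_tautology(lambda p, q: (not p) or (p or q), "PQ"))
```

Brute force is fine here: a formula with n variables has only 2^n rows to check, so this is exact, if exponential.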

A: That seems cautious and reasonable.

So let’s say I’m willing to assent to some premises. I would worry about what happens if other people don’t share the same premises as me. They’ll derive different conclusions from the same information, won’t they?

B: They will. Is that a problem?

A: It all sounds rather… subjective. I suppose it might be fine for personal things, but how would it work for Science?

B: Well, conclusions either follow from premises or they don’t. That seems a pretty objective sort of thing, doesn’t it?

A: But those premises…?

B: OK, OK. What might you do about those others and their crazy premises?  You can try to derive the conclusions you prefer from their premises. If you succeed, then either you’ve shown that they are inconsistent, so they need to re-evaluate their premises, or new information has generated a contradiction using their premises where there wasn’t one before. Either way they’d be irrational not to come to the same conclusion as you. And then you’re agreeing. Unless they could manage a counter argument that did the same thing. In which case you might have to agree with them.

This sort of exercise is useful because it’s hard, to say the least, to figure out all the logical consequences of your premises. So you’d be doing them a favour, bringing the scientific community closer to consistency, and keeping everyone up to date with the implications of new information.

Go Science!
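(Another aside: B’s move of showing someone’s premises are inconsistent also reduces to a mechanical check, at least for propositional premises. A sketch, again with names and the example of my own choosing.)

```python
from itertools import product

def consistent(premises, n_vars):
    """A set of premises is consistent iff at least one assignment of
    truth values satisfies every premise simultaneously."""
    return any(all(p(*values) for p in premises)
               for values in product([True, False], repeat=n_vars))

# Premises: 'P implies Q', 'P', and 'not Q'. No assignment satisfies
# all three, so anyone holding them must re-evaluate, as B suggests.
premises = [lambda p, q: (not p) or q,  # P implies Q
            lambda p, q: p,
            lambda p, q: not q]
print(consistent(premises, 2))  # False: the set is inconsistent
```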

A: That’s a very optimistic picture. But what if we needed lots of premises to generate implications we could get information about? How would we know which one to update?

B: We wouldn’t. At least not directly. For each possibly guilty premise we’d have to figure out some implication it had that didn’t involve the other premises, and then get some information about whether that implication held.

That might sometimes be very hard to do, but the general idea doesn’t seem to be very troubling or difficult, does it?

A: I suppose not. Nevertheless, if we were to have this whole discussion again, replacing ‘know’ with ‘believe’, ‘logical implication’ with ‘posterior’, ‘premises’ with ‘prior’ or ‘likelihood’, and ‘information’ with ‘data’ then I’m pretty sure I would suddenly find grave doubts and insoluble complexities sufficient to put me off the whole idea.
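(To spell out A’s substitution: a minimal Bayesian update, with premises playing the role of prior and likelihood, and information playing the role of data. The coin-flipping hypotheses and all numbers are my own, purely illustrative.)

```python
# Two hypotheses about a coin, with a prior over them and a
# likelihood P(heads | hypothesis) for each.
prior = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.9}

# The 'information': a sequence of observed flips.
data = ["heads", "heads", "tails"]

# Bayes' rule, applied one observation at a time:
# posterior(h) is proportional to prior(h) * likelihood(data | h).
posterior = dict(prior)
for outcome in data:
    for h in posterior:
        likelihood = p_heads[h] if outcome == "heads" else 1 - p_heads[h]
        posterior[h] *= likelihood
    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}

print(posterior)  # 'fair' ends up more probable than 'biased'
```

The single observed tail counts heavily against the ‘biased’ hypothesis (which assigns it probability 0.1), which is why ‘fair’ comes out ahead despite the two heads.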

