by HughM388 Sat Aug 01, 2020 5:24 pm
Based on what the stimulus tells us, I still don't see how we can conclude that Daniel and Carrie disagree about (D). To arrive at (D) we need to make a fairly audacious inference—which for an identify-the-disagreement question seems illicit, even if the question is at the back end of the section. Please, if anyone still reads and uses this forum, correct me where I'm wrong.
I think we can pretty quickly see that Daniel definitely agrees with the proposition in (D), which is clearly a restatement of his argument. But on Carrie's side, we know only that she believes fulfilling moral obligations is the sole requirement of a morally good action. Beyond that, the stimulus and (D) tell us nothing.
The palaver above about Sally and her requirement for good teeth is descriptive and nicely argued (though I will note that saying that "Sally has only one requirement" could just as plausibly mean that Sally will take all comers as long as they fulfill her single requirement).
But it doesn't explain how we can know that an action performed with the wrong motivations might also fulfill a moral obligation—which is, I think, the problem that most people encounter in this question. How do we know that such a scenario is even possible? Perhaps no action can be both wrongly motivated and yet fulfill a moral obligation (even now I'm imagining an LR question, disguised in suitably convoluted language, that plays upon such a potential incongruity).
That is the aforementioned inference the question seems to require us, recklessly, to make. As this question demonstrates, LSAC evidently considers such an inference permissible. But on an identify-the-disagreement question, it seems to me that the point of disagreement should be fairly explicit, so that identifying it does not require a leap of presumption.
It's made even more artificially, and thus cheaply, difficult because Daniel and Carrie are talking about highly abstract, hypothetical notions. If they were talking about actual things of the concrete world (for example, flavors of cake: whether the flavor of the icing is the only thing that can make a cake delicious, or whether it's instead the flavor of the cake itself, where we know for a fact that different flavors of icing and cake exist; complete the analogy according to your imagination), the inferential leap this question induces would be reasonably supportable. But as it stands (I am repeating myself here, and that's fine): do we even know whether it's possible for an action to be wrongly motivated and yet fulfill a moral obligation? I don't know that, and in evaluating (D) I wasn't prepared to presume that Carrie believed it; it seemed like a trap, in fact.