The percentage of questions you answer correctly is not tied directly to the score - on either test, you can get more questions wrong and still earn a higher score, and vice versa. What matters is the difficulty level of the questions and the pattern of right and wrong answers as you move through the section.
For instance, I could do amazingly well for the first 27 questions in quant (and be up at 99th percentile) and then get the last 10 questions wrong (because, say, I ran out of time), and my score would tank. My score isn't an average across all 37 questions. My score is the level that I'm at when the test finishes - and 10 Qs wrong in a row at the end will pull me well down below the 99th percentile.
That's an extreme case obviously, but I'm just trying to illustrate the point that you can't assess the scores based on the # correct or percentage correct - that's not how the test works.
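If it helps to see the idea in action, here's a deliberately crude little simulation - a caricature of an adaptive test, NOT the real GMAT or GMATPrep algorithm (neither has ever been published). It's just a toy in which the ability estimate drifts up or down after each answer, with later answers moving it a bit less:

```python
import math

def simulate(responses, start=0.0, k0=1.0):
    """Toy adaptive 'walk' - NOT the real GMAT algorithm (never published).

    responses: list of True/False (correct/incorrect), in the order answered.
    The ability estimate moves up on a right answer and down on a wrong one;
    later answers move it less, loosely mimicking how a CAT settles in.
    """
    theta = start
    history = [theta]
    for i, correct in enumerate(responses, start=1):
        step = k0 / math.sqrt(i)           # shrinking step size
        theta += step if correct else -step
        history.append(theta)
    return theta, history                   # the FINAL theta is the "score"

# 27 right in a row, then 10 wrong in a row (e.g., rushing at the end):
final, hist = simulate([True] * 27 + [False] * 10)
print(f"peak estimate (around Q27): {max(hist):.2f}")
print(f"final reported estimate:    {final:.2f}")   # well below the peak

# Same 27 right / 10 wrong, but the misses come early instead of late:
final_early, _ = simulate([False] * 10 + [True] * 27)
print(f"same # correct, misses early: {final_early:.2f}")  # much lower
```

Run it and you'll see the estimate peak around question 27 and then slide, and that the same number of correct answers can produce very different final numbers depending on where the misses land - which is the whole point.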
The 15-17 min early finish is a really big deal. It means that you very likely missed some questions that you did know how to do, because you made careless mistakes while rushing (and fatigued). That, in turn, would lead to you being offered lower-level questions, which you'd of course be more likely to get right - except you'd still make some careless mistakes even on those, because again you're rushing and tired. So it's actually not surprising at all that you might have gotten more right in the end but still earned a lower score.
The mental fatigue is a real problem for everyone. Read this:
http://www.manhattangmat.com/blog/index ... you-crazy/

"i am planning to give more tests on weekends and improve my sitting...."
That's not really how you improve - not efficiently. CAT exams are really good for (a) figuring out where you're scoring right now, (b) practicing stamina (which is an issue in your case, yes), and (c) analyzing your strengths and weaknesses. The actual act of just taking the exam, though, is an incredibly inefficient way to improve - it'll take you forever. It's what you do with the test results between tests that helps you to improve.
"but i have lost faith in score given by different CATs apart from real GMAT.. i think the algorithm used is different...."
It is absolutely the case that all practice tests use a different algorithm from GMATPrep - that's because the specific GMATPrep algorithm has never been publicly disclosed. And, in fact, there seem to be some differences even between GMATPrep and the real test (e.g., experimental questions), though we don't know for sure because the real test algorithm has also never been disclosed.
We do know the algorithmic theory on which the test is based, though, and that's what companies have used to re-engineer the algorithm. These practice tests mimic the real test but, no, they are not exactly like the real thing. Just be aware that your observation about the # correct leading to different scores is not evidence that the algorithms are so different that there's a problem. The tests seem to be performing correctly in the situation you described (given the info that you told me).
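For what it's worth, the published theory in question is item response theory. Here's the textbook three-parameter logistic item model, just to give a flavor of the kind of math the re-engineered algorithms are built on - again, this is the standard model from the literature, not GMAC's actual, undisclosed implementation:

```python
import math

def p_correct(theta, a, b, c):
    """Standard 3PL item model from item response theory (textbook version,
    not GMAC's undisclosed implementation).

    theta: test-taker ability
    a: item discrimination, b: item difficulty, c: guessing floor
    """
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# An average test-taker (theta = 0) facing a hard item (b = 1.5):
print(round(p_correct(theta=0.0, a=1.0, b=1.5, c=0.25), 2))  # ~0.39
```

A CAT engine built on this kind of model picks the next question near your current estimated theta and re-estimates theta after every response - which is why the difficulty of what you get right matters more than the raw count.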