Bug 59152 - Thread Group: Change "Action to be taken after a Sample Error" value from "Continue" to "Start Next thread loop"
Status: RESOLVED WONTFIX
Alias: None
Product: JMeter
Classification: Unclassified
Component: Main
Version: 2.13
Hardware: All
OS: All
Importance: P2 enhancement
Target Milestone: ---
Assignee: JMeter issues mailing list
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2016-03-09 12:19 UTC by Antonio Gomes Rodrigues
Modified: 2016-05-24 14:37 UTC
CC List: 2 users



Description Antonio Gomes Rodrigues 2016-03-09 12:19:38 UTC
Hi,

This patch changes the default Thread Group -> "Action to be taken after a Sample Error" value from "Continue" to "Start Next Thread Loop".

I think it's better to start the next thread loop instead of continuing without taking the error into account, because continuing after an error makes no sense (it's not realistic).

Antonio
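For context, this setting is persisted in the .jmx test plan as the Thread Group's "on sample error" property. A minimal sketch of the relevant fragment (element and value names as they appear in JMeter's JMX format; the patch changes the default the GUI writes from continue to startnextloop):

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <!-- Action to be taken after a Sample Error; known values include
       continue, startnextloop, stopthread, stoptest, stoptestnow -->
  <stringProp name="ThreadGroup.on_sample_error">startnextloop</stringProp>
  <!-- ... other Thread Group properties (num_threads, ramp_time, ...) ... -->
</ThreadGroup>
```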
Comment 1 Antonio Gomes Rodrigues 2016-03-09 12:26:14 UTC
PR 161 committed

Antonio
Comment 2 Philippe Mouawad 2016-03-09 21:09:43 UTC
Date: Wed Mar  9 21:09:24 2016
New Revision: 1734313

URL: http://svn.apache.org/viewvc?rev=1734313&view=rev
Log:
Bug 59152 - Thread Group: Change "Action to be taken after a Sample Error" value from "Continue" to "Start Next thread loop"
#resolve #161
https://github.com/apache/jmeter/pull/161
Bugzilla Id: 59152

Modified:
    jmeter/trunk/bin/templates/recording-with-think-time.jmx
    jmeter/trunk/bin/templates/recording.jmx
    jmeter/trunk/src/core/org/apache/jmeter/threads/gui/AbstractThreadGroupGui.java
    jmeter/trunk/xdocs/changes.xml
Comment 3 Sebb 2016-05-03 20:56:35 UTC
I've just fallen foul of this change.

I set up a test which deliberately had a couple of failing samples, but only one of them was run.

It took me quite a while to work out what had happened.

I think this change is going to be a nuisance for others as well and should be reverted.
Comment 4 Philippe Mouawad 2016-05-03 21:05:59 UTC
(In reply to Sebb from comment #3)
> I've just fallen foul of this change.
> 
> I set up a test which deliberately had a couple of failing samples, but only
> one of them was run.
> 
> It took me quite a while to work out what had happened.
> 
> I think this change is going to be a nuisance for others as well and should
> be reverted.

I don't share this opinion; I agree with the initial motivations for this change.
In my experience (webapp testing), I always set the value to what it now is after the change.

Since the change is mentioned in the Incompatible changes section, it is not a problem for me.

But I could be wrong.
What is the use case of your test plan?
Thanks
Comment 5 Sebb 2016-05-03 21:16:35 UTC
(In reply to Philippe Mouawad from comment #4)
> (In reply to Sebb from comment #3)
> > I've just fallen foul of this change.
> > 
> > I set up a test which deliberately had a couple of failing samples, but only
> > one of them was run.
> > 
> > It took me quite a while to work out what had happened.
> > 
> > I think this change is going to be a nuisance for others as well and should
> > be reverted.
> 
> I don't share this opinion and agree with initial motivations of this change.
> In my experience (webapps testing), I always set the value to what it is
> after the change.
> 
> Since change is mentioned in Incompatible changes, it is not a problem for
> me.

It is one line amongst many.
Even though I know it's there, the consequences are not immediately obvious.

> But I can be wrong.
> What is the Use case of your plan ? 

I was testing the changes for Content-Encoding and wanted to see how error pages were handled.

I quite often want to use invalid URLs or failed samples.

Also when one is developing a test plan, it's easy to make mistakes. With the new default the first mistake stops the test run, so it can take a long time to find all the mistakes. 

I think this will penalise people who are new to JMeter or who use it infrequently.
Comment 6 Milamber 2016-05-04 06:52:46 UTC
Hello,

I think that reverting will be a good thing. I made some load tests with this new default option, and in my experience this new default masks a bad load test.
Let me explain:
If you have a test plan like this:
1/ Login Form
2/ Login (id/pass)
3/ Home
4/ Search
5/ Results
etc.

With this default option, if 2/ Login fails (the target server randomly fails logins), you see only some errors (during the load test) on 2/ Login; the other pages show 0 errors (because the thread returns to 1/ after an error).

Afterwards, when you look at the graph results, everything seems fine.
If you look at the results in the summary listener, you see something like this:
1/ Login form : 2000 samples
2/ Login (id/pass) : 2000 samples with 25% errors
3/ Home : 1500 samples with 0 errors
4/ Search : 1500 samples with 0 errors
etc.

It's very easy to conclude that this load test was successful (only 25% errors on 1 page), but in reality it is a bad load test, because my target load is reduced by 25%, and the target server has been tested at only 75% of the planned load.
I prefer the old option, which forces me to stop a load test if the errors increase on all pages after an error on the login page.
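The arithmetic behind this scenario can be sketched as follows (numbers taken from the comment above; the helper name is illustrative, not JMeter code):

```java
// Sketch: how "Start Next Thread Loop" masks reduced load on the
// pages that come after a failing step.
public class MaskedLoadSketch {
    /**
     * Samples that reach the steps after the failing one when every
     * error restarts the loop instead of continuing to the next step.
     */
    static int downstreamSamples(int iterations, double errorRate) {
        return (int) Math.round(iterations * (1.0 - errorRate));
    }

    public static void main(String[] args) {
        int planned = 2000;        // loop iterations that reach 2/ Login
        double loginErrors = 0.25; // 25% of logins fail and restart the loop

        int reached = downstreamSamples(planned, loginErrors);
        // 3/ Home, 4/ Search etc. each report 0% errors, yet they only
        // received this fraction of the planned load:
        System.out.println(reached);                    // 1500
        System.out.println((double) reached / planned); // 0.75
    }
}
```

Each downstream page looks clean in the listeners, but it was exercised at only 75% of the intended load, which is exactly the masking effect described above.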
Comment 7 Philippe Mouawad 2016-05-04 20:07:07 UTC
Hi,
Ok for me.
I don't want to delay release for this.

Who reverts it?

Thanks
Regards
Philippe
Comment 8 Sebb 2016-05-04 20:33:31 UTC
I will revert it
Comment 9 Sebb 2016-05-04 20:42:08 UTC
Reverted:

URL: http://svn.apache.org/viewvc?rev=1742333&view=rev
Log:
Revert r1734313 so default action remains as 'Continue'
Bugzilla Id: 59152

Modified:
    jmeter/trunk/bin/templates/recording-with-think-time.jmx
    jmeter/trunk/bin/templates/recording.jmx
    jmeter/trunk/src/core/org/apache/jmeter/threads/gui/AbstractThreadGroupGui.java
    jmeter/trunk/xdocs/changes.xml
Comment 10 Vladimir Sitnikov 2016-05-24 12:12:37 UTC
Milamber>It's very easy to conclude that this load test is successful (only 25% errors on 1 page), but in reality, this is a bad load test, because my target load is reduce to 25%, and the target server have been tested with only at 75% of the load.

Technically speaking, "validation" of a test report should include not only "% of errors" validation, but also "planned throughput vs actual throughput" and "planned response times vs actual response times".


If a test is hard to set up (e.g. lots of steps), then it might be better to start a new iteration to try one's best to achieve the "throughput goal".

At the end of the day, real users do restart from scratch in case of failure.


Note: a high failure rate would be misleading, since it shows the "consequence", while it makes much more sense to know the "root cause". In that case, it would be good to restart after the first failure, and the %error would show exactly the step that failed.
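The validation rule proposed here can be sketched as a simple report check (a sketch with illustrative names and thresholds, not JMeter API; the point is that a throughput check catches what an error-rate check alone misses):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: validate a load-test run by planned vs actual throughput,
// not only by error percentage.
public class RunValidationSketch {
    static List<String> validateRun(double plannedTps, double actualTps,
                                    double errorPct) {
        double maxErrorPct = 5.0;  // assumed acceptance thresholds,
        double minTpsRatio = 0.9;  // chosen for illustration only
        List<String> problems = new ArrayList<>();
        if (errorPct > maxErrorPct) {
            problems.add("error rate " + errorPct + "% exceeds " + maxErrorPct + "%");
        }
        if (actualTps < plannedTps * minTpsRatio) {
            problems.add("throughput " + actualTps + "/s is below 90% of planned "
                         + plannedTps + "/s");
        }
        return problems;
    }

    public static void main(String[] args) {
        // Comment 6's scenario: per-page errors look low, but downstream
        // throughput is only 75% of plan -- the throughput check flags it.
        System.out.println(validateRun(100.0, 75.0, 0.0));
    }
}
```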
Comment 11 Sebb 2016-05-24 14:37:29 UTC
(In reply to Vladimir Sitnikov from comment #10)
> Milamber>It's very easy to conclude that this load test is successful (only
> 25% errors on 1 page), but in reality, this is a bad load test, because my
> target load is reduce to 25%, and the target server have been tested with
> only at 75% of the load.
> 
> Technically speaking, "validation" of a test report should include not only
> "% of errors" validation, but "planned throughput vs actual throughput",
> "planned response times vs actual response times".

It's also important to know whether any tests were skipped.
 
> 
> If a test is hard to setup (e.g. lots of steps), then it might be better to
> start new iteration to try one's best to achieve "throughput goal".
> 
> At the end of the day, real users do restart from scratch in case of failure.

More likely, they will redo the step that failed.
After a couple of such failures they will go away and try another time.

> 
> Note: high failrate would be misleading, since it will show "consequence",
> while it makes much more sense in knowing "root cause". In that case, it
> would be good to restart after the first failure, and %error would show
> exactly the step that failed.

That depends on the exact scenario. 
Some failures may not be fatal to the loop or the test, e.g. image download failure.

Only the test designer knows what the correct on-error behaviour is, and that will vary between plans and between parts of test plans.

I think the only sensible default is Continue on Error.

Partly because that is the original setting, and partly to ensure that the full test plan is exercised by default.