Discussion: JBehave Stories Best Practices?
Hans Schwäbli
2013-11-22 13:43:43 UTC
I would like to discuss best practices for using JBehave/BDD for story
writing, so I will put forward some best practices now as a JBehave/BDD
beginner.

Some of them I found online (various sources); I have left out the
justifications.

What do you think about them? Do you have any additional best practices for
story writing with JBehave?

1. Stories may be dependent on each other. If so, they must declare
their dependencies.
2. Each story typically has somewhere between five and twenty scenarios,
each describing different examples of how that feature should behave in
different circumstances.
3. Each scenario must make sense and be able to be executed
independently of any other scenario. When writing a scenario, always assume
that it will run against the system in a default, blank state.
4. Each scenario typically has somewhere between 5 and 15 steps (not
counting step multiplication by example tables).
5. A scenario should consist of steps of both types: action ("Given" or
"When") and verification ("Then").
6. Each scenario, including example table, should not run longer than 3
minutes.
7. Steps of type "Given" and "When" should not perform a verification
and steps of type "Then" should not perform actions.
8. Step names should not contain GUI information but be expressed in a
client-neutral way wherever possible. Instead of "*Then a popup window
appears where a user can sign in*" it would be better to use "*Then the
user can sign in*". Only use GUI words in step names if you intend to
specifically test the GUI layer.
9. Step names should not contain technical details but be written in
business language terms.
10. Use declarative style for your steps instead of imperative (see the
example in "The Cucumber Book", pages 91-93, and the sketch after this list).
11. Choose an appropriate language. If your requirements specification
is in French for instance and most of the business analysts, programmers
and testers speak French, write the stories in that language.
12. Don't mix languages in stories.
13. Use comments sparingly in stories.
14. Avoid overly detailed steps like "*When user enters street name*".
15. Don't use step aliases for different languages. Instead choose just
one language for all your stories.
16. Use step name aliases sparingly.
17. Prioritize your stories using meta information so that only high
priority stories can be executed if required.
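
To make points 8, 10 and 14 concrete, here is a hypothetical sign-up example
(invented for illustration, not from any real project). The imperative
version leaks GUI detail into the step names:

Scenario: New customer signs up (imperative style)
Given the user is on the "Sign up" page
When the user types "Hans" into the name field
And the user types "secret123" into the password field
And the user clicks the "Create account" button
Then a popup window confirms the registration

The declarative version expresses the same behaviour in client-neutral
business terms, leaving the GUI mechanics to the step implementations:

Scenario: New customer signs up (declarative style)
Given an unregistered user
When the user signs up for a new account
Then the user can sign in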
Hans Schwäbli
2013-11-27 07:58:07 UTC
I would especially like to discuss this issue:

*3. Each scenario must make sense and be able to be executed independently
of any other scenario. When writing a scenario, always assume that it will
run against the system in a default, blank state.*

I quoted that from "The Cucumber Book". It sounds good at first, but I am
not so sure about it. For one thing, the system is almost never in a "blank
state"; it only is at the very beginning, right after the first rollout.

If this best practice is applied, it can cause excessively long story
execution times in some environments: each scenario first has to create some
data (which can be a lot) in order to perform the actual test.

The above-mentioned best practice seems to make sense if you have control
over the test data in the database which the system under test (SUT)
accesses. Then you could create a basic test data set in the SUT for
various purposes and pick, in the stories, the data from which you want to
start your test. So you could cherry-pick data on which you can perform
high-level tests without first having to create the required data.

But if you have no control over that test data in the SUT, then you have to
create a lot of data in the scenarios before you can perform the actual
test. This applies, for instance, if you have to use a copy of the
production data as your test data. This data is created in a very complex
way by many subsystems, so there is no way to design a basic (common)
test data set for the tests.

So I thought that in this kind of environment, where you have no control
over the test data set, it might be better for scenarios not to be
independent of each other, in order to optimize story execution time and
repeat less data creation.

Maybe a solution would be a feature I have seen in Cucumber which is
similar to a feature in JUnit. You can define a "Background" for all your
scenarios in Cucumber. This is a kind of test fixture, like what you do in a
JUnit method annotated with @BeforeClass or @Before. I could not figure out
whether it is executed just once for all scenarios or once per scenario. It
would only be helpful for the problem I mentioned if it were performed once
for all scenarios (a similar purpose to @BeforeClass in JUnit).

What do you think about the problems I see with the best practice mentioned
above, and how would you solve them in an environment where you have to use
production data as test data and have nearly no control over it?
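
For reference, Cucumber's Background runs before each scenario rather than
once per feature, so as far as I can tell it would not solve this. JBehave,
though, offers lifecycle annotations on steps classes which come closer to
@BeforeClass. A minimal sketch, where DataFixtures is a hypothetical helper
for creating the shared data:

import org.jbehave.core.annotations.AfterStory;
import org.jbehave.core.annotations.BeforeStory;

public class SharedDataSteps {

    @BeforeStory
    public void createSharedTestData() {
        // Runs once before each story (not before every scenario),
        // similar in spirit to JUnit's @BeforeClass.
        DataFixtures.createCustomers(); // hypothetical helper
        DataFixtures.createContracts(); // hypothetical helper
    }

    @AfterStory
    public void removeSharedTestData() {
        DataFixtures.cleanUp(); // hypothetical helper
    }
}

There are also @BeforeStories and @AfterStories for one-time setup and
teardown across a whole suite run.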
Stephen de Vries
2013-11-27 08:19:12 UTC
I’ve run into this same limitation and would also like to see a @BeforeClass type of functionality in JBehave. At the moment I’ve worked around this by breaking the Cucumber recommendation and simply having the very first scenario in the story perform the data setup, with subsequent scenarios relying on the data created by the first one. It’s nasty, and of course the scenarios cannot be executed independently.

Another (maybe less ugly) hack would be to wrap all the setup steps into a single meta-step in the steps file. Then every scenario that depends on that setup could use it as a “Given”, e.g.:

private static boolean isDataSetup = false; // shared across scenarios in this JVM run

@Given("all data is setup")
public void setupEverything() {
    // Lazy one-time setup: scenarios after the first one that use
    // this Given effectively become no-ops.
    if (!isDataSetup) {
        setupData1();
        setupData2();
        setupData3();
        isDataSetup = true;
    }
}

A related reason to want this type of functionality is that there’s no way to continue running a scenario if one of the steps fails. In my case, I want to test that a number of HTTP security headers are set on the site, so I have one scenario which looks like this:

Given a browser configured to use an intercepting proxy
And the proxy logs are cleared
When the secure base Url for the application is accessed
And the first HTTP request-response is saved
Then the X-Frame-Options header is either SAMEORIGIN or DENY
And the X-XSS-Protection header contains the value: 1; mode=block
And the Strict-Transport-Security header is set
And the Access-Control-Allow-Origin header must not be: *
And the X-Content-Type-Options header contains the value: nosniff

Ideally, if one of those “Then” conditions fails, I’d like JBehave to continue, so that everyone can see the full set of missing headers. But since this isn’t supported, the recommended approach is to break this scenario up into five separate scenarios. Then I don’t want to do all the setup stuff five times, just once, hence the need for @BeforeClass again :)
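
Short of splitting the scenario, the checks could be folded into a single "Then" step that collects all failures before asserting, so every missing header shows up in one run. A rough sketch, assuming the saved headers end up in a map populated by an earlier step (the field name and type are invented for this sketch):

import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.jbehave.core.annotations.Then;

public class SecurityHeaderSteps {

    // Assumed to be populated by the "the first HTTP request-response
    // is saved" step.
    private Map<String, String> savedHeaders;

    @Then("all required security headers are set")
    public void thenAllSecurityHeadersAreSet() {
        List<String> failures = new ArrayList<>();
        String frameOptions = savedHeaders.get("X-Frame-Options");
        if (!"SAMEORIGIN".equals(frameOptions) && !"DENY".equals(frameOptions)) {
            failures.add("X-Frame-Options was: " + frameOptions);
        }
        if (!"1; mode=block".equals(savedHeaders.get("X-XSS-Protection"))) {
            failures.add("X-XSS-Protection is not: 1; mode=block");
        }
        if (savedHeaders.get("Strict-Transport-Security") == null) {
            failures.add("Strict-Transport-Security is not set");
        }
        if ("*".equals(savedHeaders.get("Access-Control-Allow-Origin"))) {
            failures.add("Access-Control-Allow-Origin must not be: *");
        }
        if (!"nosniff".equals(savedHeaders.get("X-Content-Type-Options"))) {
            failures.add("X-Content-Type-Options is not: nosniff");
        }
        // A single assertion at the end reports every failing header at once.
        assertTrue("Header problems: " + failures, failures.isEmpty());
    }
}

The trade-off is a coarser-grained step in the story itself, but a single run then lists everything that is wrong.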


regards,
Stephen
Mauro Talevi
2013-11-27 12:15:38 UTC
Hi Hans,

thanks for starting this discussion. It is rather useful.

I tend to agree with most of the points below but not all.

Notably, I think stories should be independently executable, declaring
via GivenStories all the preconditions they need. Scenarios, however, are
not necessarily independent and, crucially, will not always run against a
blank state. That works for simple demo scenarios, but not for complex
testing strategies. A scenario should declare its state and preconditions
when necessary (again via GivenStories, possibly selecting one specific
scenario to depend on, or with the Lifecycle Before steps, e.g. to reset
state), but it may also depend on the state of the previous scenario.
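
A minimal sketch of the story-level syntax (the story path and the steps
are invented for illustration):

GivenStories: preconditions/customer_exists.story

Lifecycle:
Before:
Given an empty shopping basket

Scenario: Customer adds a book to the basket
When the customer adds "The Cucumber Book" to the basket
Then the basket contains 1 item

Note that the Lifecycle Before steps run before each scenario of the story,
not once per story.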

Also, with regard to point 6, imposing an arbitrary time limit on scenario
execution is not advisable a priori. True, one needs to be aware of timing,
because if execution takes too long the tests will not be run as often as
they should be, but the time considerations are tied to the nature of the
system under test. Some scenarios will run for longer than a few minutes.
A better solution is to structure the running of stories in parallel when
possible.
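
For example, the embedder can be configured to run stories on multiple
threads. A minimal sketch (the thread count and story location are
arbitrary):

import static java.util.Arrays.asList;

import java.util.List;

import org.jbehave.core.io.CodeLocations;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;

public class ParallelStories extends JUnitStories {

    public ParallelStories() {
        // Run up to 4 stories concurrently; scenarios within a
        // single story still execute sequentially.
        configuredEmbedder().embedderControls().useThreads(4);
    }

    @Override
    protected List<String> storyPaths() {
        return new StoryFinder().findPaths(
                CodeLocations.codeLocationFromClass(getClass()).getFile(),
                asList("**/*.story"), asList(""));
    }
}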

If you want, you could start a new doc page contribution that we can
evolve over time.

Feel free to create a JIRA issue and provide a pull request to a new
page in
https://github.com/jbehave/jbehave-core/tree/master/distribution/src/site/content

Cheers
Hans Schwäbli
2013-12-04 11:54:35 UTC
Thank you for your responses. I don't know whether the overhead of writing
HTML documentation (there is no wiki) is too large for me. I will think
it over.

Currently it looks like this (just a draft to be discussed):

General:

- Use comments sparingly.
- Choose an appropriate language. If your requirements specification is
in French for instance and most of the business analysts, programmers and
testers speak French, write the stories in that language.
- Don't mix languages.

Stories:

- Stories may be dependent on each other. If so, they must declare their
dependencies in a machine-executable way. When writing a story, always
assume that it will run against the system in a default, blank state.
- Stories should be repeatable, without, for instance, having to clean up
data manually before a story can be run again.
- A story typically has somewhere between five and twenty scenarios,
each describing different examples of how that feature should behave in
different circumstances.
- Prioritize your stories using meta information so that only high
priority stories can be executed if required (see the sketch after this
list).
- Categorize your stories, for instance also via meta information.
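
For illustration, both priority and category could be expressed as story
meta properties and then used as run-time filters. The property names below
are invented for illustration:

Meta:
@priority high
@category billing

Scenario: An invoice is created for a new contract
Given a contract signed today
Then an invoice for the contract exists

The runner could then include only high-priority stories with something like:

configuredEmbedder().useMetaFilters(asList("+priority high"));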

Scenarios:

- Each scenario may depend on a previous scenario of the same story.
- Each scenario typically has somewhere between 5 and 15 steps (not
considering step multiplication by example tables).
- A scenario should consist of steps of both types: action ("Given" or
"When") and verification ("Then").
- Each scenario, including its example table, should not take too much
time to finish on a fast environment.

Steps:

- Simple steps (not composite ones) of type "Given" and "When" should not
perform verifications, and steps of type "Then" should not perform actions.
- Step names should not contain GUI information but should be expressed in
a client-neutral way wherever possible. Instead of "Then a popup window
appears where a user can sign in" it would be better to use "Then the user
can sign in". Only use GUI words in step names if you intend to
specifically test the GUI layer (see the sketch after this list).
- Step names should not contain technical details but be written in
business language terms.
- Use declarative style for your steps instead of imperative (see the
example in "The Cucumber Book" page 91-93).
- Avoid overly detailed steps like "When user enters street name" if you
don’t intend to test the UI interaction.
- Don't use step aliases for different languages. Instead choose just
one language for all your stories.
- Use step name aliases sparingly.
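
To illustrate the client-neutral naming: the step text stays business-facing
while the implementation drives whichever client you test against. A minimal
sketch (SignInPage is a hypothetical page object):

import static org.junit.Assert.assertTrue;

import org.jbehave.core.annotations.Then;

public class SignInSteps {

    @Then("the user can sign in")
    public void thenTheUserCanSignIn() {
        // The GUI knowledge lives here, not in the story text.
        SignInPage page = SignInPage.open(); // hypothetical page object
        page.signInAs("hans", "secret");     // hypothetical helper
        assertTrue(page.isSignedIn());
    }
}

If the client later changes, say from a web UI to a REST API, only this
steps class changes; the stories remain valid.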