assert_cmd experience report #63

Open
matklad opened this issue Dec 4, 2018 · 18 comments
Comments


matklad commented Dec 4, 2018

Hi! Today, I've written a bunch of tests for https://github.com/ferrous-systems/cargo-review-deps/blob/3ba1523f3b2c2cfe807bc76f42bf972d0efdc113/tests/test_cli.rs.

Initially I started with assert_cmd, then switched to assert_cli, and now I have "write my own minimal testing harness on top of std::process::Command" on the todo list. I'd like to document my reasoning behind these decisions. Note that these are just my personal preferences, though!

The first (but very minor) issue with assert_cmd was its prelude/extension-trait-based design. It makes for an API which reads well, but it is definitely a speed bump for new users who want to understand what API surface is available to them. I think that in the long term such a design might be an overall win, but it is a slight disadvantage when you are learning the library.

The issue which fundamentally made me think "I'll stick with assert_cli for now" was the predicate-based design of assertions for stdout. It has two aspects which don't fit my style of testing. The first is the same issue as with the prelude. The first example for .stdout starts with

extern crate assert_cmd;
extern crate predicates;

use assert_cmd::prelude::*;

use std::process::Command;
use predicates::prelude::*;

That's an uncomfortably large number of imports for me just to do "quick & dirty smoke testing".

The second thing is extremely personal: I just hate assertion libraries with a passion :) Especially fluent assertion ones :-) I've tried them many times (b/c they are extremely popular), but every time the conclusion was that they make me less productive.

When I write tests, I usually go through the following stages.

  • A simple test with assert!(a == b), without a custom message. This is the lowest-friction thing you can write, and making it easy to add tests is super important.
  • The second stage happens when the test eventually fails. I see assert!(false) in the console, go to the test, and write an elaborate hand-crafted error message with all kinds of useful information to help me debug the test.
  • The third stage (which I call data-driven testing) happens when I have a bunch of tests with similar setups/asserts. What I do then is remove all the asserts from all the tests by introducing a special function fn do_test(input: Input, expected: Expected) which turns the arrange and assert phases of a test into data (some JSON-like PODs, a builder DSL, or just a string with some regex-based DSL). Internally, do_test does all kinds of fancy validation of the input and custom comparisons of expected and actual outcomes.

I feel that assertion libraries sit somewhere between 1 and 2 here: they are significantly more difficult to use than plain assertions, but their error messages are not as good as hand-crafted ones. And they don't really help with 3, where you need some kind of domain-specific diffing. EDIT: what works great as a midpoint between 1 & 3 is the pytest / better_assertions style of thing, which generates a reasonable diff from a plain assert.
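
To make the third stage concrete, here is a minimal sketch of what such a do_test helper might look like. Everything here (the Input/Expected types, the example command) is made up for illustration; it is not any crate's API.

// Hypothetical data-driven harness: the arrange and assert phases become plain data.
struct Input {
    args: &'static str,
}

struct Expected {
    status: i32,
    stdout_contains: &'static str,
}

fn do_test(input: Input, expected: Expected) {
    let mut parts = input.args.split_whitespace();
    let output = std::process::Command::new(parts.next().unwrap())
        .args(parts)
        .output()
        .expect("failed to spawn process");

    let stdout = String::from_utf8_lossy(&output.stdout);
    let stderr = String::from_utf8_lossy(&output.stderr);
    // Hand-crafted failure message: dump everything useful for debugging.
    assert!(
        output.status.code() == Some(expected.status) && stdout.contains(expected.stdout_contains),
        "command failed\nstatus: {:?}\nstdout:\n{}\nstderr:\n{}",
        output.status, stdout, stderr,
    );
}

#[test]
fn help_smoke_test() {
    do_test(
        Input { args: "review-deps --help" },
        Expected { status: 0, stdout_contains: "USAGE" },
    );
}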

So, TL;DR, the reason I've switched to assert_cli is to be able to do .stdout().contains("substring").
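
For context, that assert_cli call chain sits in a full test roughly like this (written from memory of the assert_cli 0.6 README, so treat it as a sketch rather than gospel):

extern crate assert_cli;

#[test]
fn substring_check() {
    // assert_cli builds the assertion from concrete types; no predicates needed.
    assert_cli::Assert::command(&["echo", "42"])
        .stdout()
        .contains("42")
        .unwrap();
}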

The reason why I want to write my own harness instead of using assert_cli has to do with mostly technical issues:

  • It doesn't handle current_dir nicely, so I have to write custom code anyway.
  • When stdout does not match, it doesn't show stderr, which usually contains the clue as to why stdout is wrong :)

I also want to import a couple of niceties from Cargo's harness. The API I'd love to have will look like this:

// split by space to get a list of args. Not general, but covers 99% of cases
cmd("review-deps diff rand:0.6.0 rand:0.6.1")  
    // *runtime* arguments can be handled with the usual builder pattern
    .arg("-d").arg(&tmp_dir) 
    // a builder for expectation, not too fluent and easily extendable (b/c this lives in the same crate)
    .stdout_contains("< version = \"0.6.1\"") 
    // An optional debug helper. If streaming is enabled, all output from subprocess goes to the terminal as well. Super-valuable for eprintln! debugging 
    // .stream()
    // the call that actually runs the thing.
    // on error, it unconditionally prints process status, stdout & stderr, and only then a specific assert message. 
    .status(101)
    // that's it, but I can imagine this returns something you can use for fine-grained checking of the output, so
    // .output() # to get std::process::Output
    // .text_output() # to get something isomorphic to `Output`, but with Strings instead of Vec<u8>.

epage commented Dec 4, 2018

The first (but very minor) issue with assert_cmd was its prelude/extension-trait-based design. It makes for an API which reads well, but it is definitely a speed bump for new users who want to understand what API surface is available to them. I think that in the long term such a design might be an overall win, but it is a slight disadvantage when you are learning the library.

I assume this was recent, i.e. after we did a major doc revamp to try to highlight all the pieces of assert_cmd / predicates. Was that helpful at all? Where did you feel the remaining gap was?

I'd love for us to be able to have Command and friends pulled into our docs so you can see what all extension traits exist on them.

btw one of the motivations for this approach was how limiting the old API was. There was talk of people wanting to use duct (and now also commandspec). Now the user can cache the binary location (assuming they use escargot rather than your trick; see #57 for my analysis of that).


matklad commented Dec 4, 2018

I assume this was recent, i.e. after we did a major doc revamp to try to highlight all the pieces of assert_cmd / predicates. Was that helpful at all? Where did you feel the remaining gap was?

Docs are awesome, yeah, but I prefer to familiarize myself with an API by looking at the types and methods rather than by reading the docs. That's why, for me, APIs that export a few concrete types with inherent methods work better.

btw one of the motivations for this approach was how limiting the old API was. There was talk of people wanting to use duct (and now also commandspec)

Yeah, I totally understand that! Another possible design here is to have a From<Command> for AssertCmd, where AssertCmd is a concrete type with all of the stuff.
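
A rough sketch of that alternative, where all assertion methods are inherent on one concrete type (AssertCmd and stdout_contains are hypothetical names, not an existing API):

use std::process::{Command, Output};

struct AssertCmd {
    cmd: Command,
}

impl From<Command> for AssertCmd {
    fn from(cmd: Command) -> Self {
        AssertCmd { cmd }
    }
}

impl AssertCmd {
    // Inherent methods show up directly in rustdoc and IDE completion,
    // without having to know which extension trait to import.
    fn stdout_contains(mut self, needle: &str) -> Output {
        let output = self.cmd.output().expect("failed to run command");
        let stdout = String::from_utf8_lossy(&output.stdout);
        assert!(
            stdout.contains(needle),
            "stdout did not contain {:?}\nstdout:\n{}\nstderr:\n{}",
            needle, stdout, String::from_utf8_lossy(&output.stderr),
        );
        output
    }
}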


epage commented Dec 4, 2018

EDIT: what works great as a midpoint between 1 & 3 is the pytest / better_assertions style of thing, which generates a reasonable diff from a plain assert.

Do you have any thoughts on how we currently fall short of what pytest does?

I had pytest in mind when designing predicates. It tries to make it easy to define a high level assertion so we have enough information to provide a lot of useful low level / intermediate details tailored to the application.

For example, assert_cmd uses IntoOutputPredicate to coerce things into predicates, including "string". So if you do a .assert().stdout("full string"), we'll automatically create a predicate that will show a string diff on failure.
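
Concretely, with the prelude-based API that looks roughly like the snippet below (a sketch based on the assert_cmd docs of that era; the binary name and exact method set are assumptions, so check the current docs):

extern crate assert_cmd;
extern crate predicates;

use assert_cmd::prelude::*;
use predicates::prelude::*;
use std::process::Command;

#[test]
fn output_checks() {
    Command::cargo_bin("my-bin")      // assumes a binary target named `my-bin`
        .unwrap()
        .assert()
        .success()
        // A plain &str goes through IntoOutputPredicate, so a failure shows a string diff...
        .stdout("full output\n")
        // ...while an explicit predicate does a substring check instead.
        .stderr(predicate::str::contains("warning"));
}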

For assert_fs, we do have assert-rs/predicates-rs#65 for making predicates smarter.


epage commented Dec 4, 2018

The second thing is extremely personal: I just hate assertion libraries with a passion :) Especially fluent assertion ones :-) I've tried them many times (b/c they are extremely popular), but every time the conclusion was that they make me less productive.
So, TL;DR, the reason I've switched to assert_cli is to be able to do .stdout().contains("substring").

I feel there are some concrete concerns with your three iterations of test implementations that could be elaborated on, so I can understand why assert_cli's fluent assertions are OK but assert_cmd's are not.

btw one of the big motivations for switching from built-in predicates to the predicates crate was that there were a lot of requests for different assertions on top of what existed, so I wanted to try having extensible predicates that could then be reused (increasing user familiarity) with other crates like assert_fs.


matklad commented Dec 4, 2018

Do you have any thoughts on how we currently fall short of what pytest does?

Heh, I bet we could now write a procedural macro which handles the "show values of sub-expressions" feature. I am not sure we can do the "pluggable renderers" thing without specialization: the way pytest works is that it has a render-diff(lhs, rhs, op) function which dynamically dispatches on the types of lhs and rhs. To have such dispatch in Rust without too much boilerplate, I think we need specialization.
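
To illustrate why specialization comes up here, a tiny sketch (the trait is hypothetical): a blanket Debug-based renderer and a nicer type-specific one cannot currently coexist.

use std::fmt::Debug;

// Hypothetical pytest-style renderer: the blanket impl covers every Debug type...
trait RenderDiff {
    fn render_diff(&self, other: &Self) -> String;
}

impl<T: Debug> RenderDiff for T {
    fn render_diff(&self, other: &Self) -> String {
        format!("left: {:?}\nright: {:?}", self, other)
    }
}

// ...but a smarter line-by-line renderer for String conflicts with the blanket impl
// above; choosing it over the generic fallback is exactly what specialization would allow.
//
// impl RenderDiff for String {
//     fn render_diff(&self, other: &Self) -> String { /* line-oriented diff */ }
// }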

elaborated on, so I can understand why assert_cli's fluent assertions are OK

They are not exactly OK: one of the reasons why I lean towards a "do it yourself" solution is that I'll be able to write my own assertions, cmd(...).stdout_json("..."). But they are easier for me to understand than predicates because they operate on concrete types and don't use generics.

btw one of the big motivations for switching from built-in predicates to the predicates crate was that there were a lot of requests for different assertions on top of what existed, so I wanted to try having extensible predicates that could then be reused (increasing user familiarity) with other crates like assert_fs.

Yeah, if you want open-world extensibility, then such a predicate-based solution is more or less the sole design point. It's just that with my current style of testing, I prefer a closed world where I can write stdout_contains(...) rather than something based on generics, with the associated cognitive load.


epage commented Dec 4, 2018

Yeah, I totally understand that! Another possible design here is to have a From<Command> for AssertCmd, where AssertCmd is a concrete type with all of the stuff.

So assert_cmd falls into two parts

  • Setup
  • Assertions

The assertions are on a concrete type (Assert) and use a trait to convert from various concrete types to Assert. I didn't use From so that (1) there are no ambiguous situations requiring turbofish, and (2) conversions can be fallible.

Any thoughts on how this design is a step back from your ideal?
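
For readers following along, a sketch of the shape of that design (trait and method names here are illustrative, not assert_cmd's actual ones): a dedicated conversion trait gives call sites one unambiguous .assert() entry point and leaves room for conversions that can fail.

use std::process::{Command, Output};

struct Assert {
    output: Output,
}

trait IntoAssert {
    fn assert(self) -> Assert;
}

impl IntoAssert for Output {
    fn assert(self) -> Assert {
        Assert { output: self }
    }
}

impl<'a> IntoAssert for &'a mut Command {
    fn assert(self) -> Assert {
        // Running the command can fail; a real implementation would surface that
        // error nicely instead of panicking here.
        self.output().expect("failed to run command").assert()
    }
}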


epage commented Dec 4, 2018

Yeah, if you want open-world extensibility, then such a predicate-based solution is more or less the sole design point. It's just that with my current style of testing, I prefer a closed world where I can write stdout_contains(...) rather than something based on generics, with the associated cognitive load.

Understandable. It is a downside of our current approach. My hope was that, through examples and documentation, we could avoid people having to worry about the details, which is where I assume most of the cognitive load is.


epage commented Dec 4, 2018

When stdout does not match, it doesn't show stderr, which usually contains the clue as to why stdout is wrong :)

This was a trade-off between giving people the information they need and not overloading them. I am open to changing this.

I do like having something like

    // An optional debug helper. If streaming is enabled, all output from subprocess goes to the terminal as well. Super-valuable for eprintln! debugging 
    // .stream()

I tend to design my tests so that a failure gives me the context I need without having to go back and change the test. One reason I prefer macros over predicates is that the failure points to the test rather than the library, avoiding the need to re-run with RUST_BACKTRACE.

For controlling outputting more information, an environment variable seems better than modifying the code.

Any thoughts on

  • ASSERT_CMD_DUMP or something to show all
  • .dump(bool) to show all
  • .dump(str) to check an environment variable to show all
  • Or another approach entirely?
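
For what it's worth, the environment-variable option from the list above could be as small as this helper (ASSERT_CMD_DUMP is just the placeholder name from the list, not an existing variable):

use std::process::Output;

// Hypothetical helper: dump the full process output only when the caller opts in.
fn maybe_dump(output: &Output) {
    if std::env::var_os("ASSERT_CMD_DUMP").is_some() {
        eprintln!("status: {:?}", output.status);
        eprintln!("stdout:\n{}", String::from_utf8_lossy(&output.stdout));
        eprintln!("stderr:\n{}", String::from_utf8_lossy(&output.stderr));
    }
}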


epage commented Dec 4, 2018

They are not exactly OK: one of the reasons why I lean towards a "do it yourself" solution is that I'll be able to write my own assertions, cmd(...).stdout_json("..."). But they are easier for me to understand than predicates because they operate on concrete types and don't use generics.

So let me see if I understand where the gap is with assert_cmd in trying to solve the 3rd iteration of test design.

It isn't that assert_cmd can't handle it; it's just the overhead of writing one-off logic with a generic API like predicates?


matklad commented Dec 4, 2018

When stdout does not match, it doesn't show stderr, which usually contains the clue as to why stdout is wrong :)

A concrete example of why this would be useful for me, from today's battle with Travis:

I run diff -r --color=auto as a subprocess in my cargo-review-deps thing, and assert with .stdout_contains("< ..."). It worked locally, but on Travis I got:

expected to find needle `< ...`
haystack: ``

which was rather unhelpful. When I added an additional .stderr.is(""), I got:

expected: ``
actual: `diff: unrecognized option --color=auto`


matklad commented Dec 4, 2018

For controlling outputting more information, an environment variable seems better than modifying the code.

I think having both wouldn't hurt? When running a single test from an IDE, sticking .stream() in and hitting Ctrl+R is easier than setting up an env var.


epage commented Dec 4, 2018

I think having both wouldn't hurt? When running a single test from an IDE, sticking .stream() in and hitting Ctrl+R is easier than setting up an env var.

I forget about the IDE use case :), thanks

And in general, thanks for taking the time to provide all this feedback. This has been a very helpful discussion!


epage commented Dec 4, 2018

Looks like in assert_cmd, I did go ahead with the approach of dumping everything.

On test failure, we Display self, which includes "context" (the command, if we have it) as well as dumping Output, which includes the exit code and a best-effort rendering of stdout/stderr.

So I guess we don't need a way to dump more information.


matklad commented Dec 4, 2018

It isn't that assert_cmd can't handle it; it's just the overhead of writing one-off logic with a generic API like predicates?

Exactly! An API which would work for me would be something like this:

  • a utility function to get the right binary, using escargot or the current_exe trick, so that I don't need to copy-paste it from Cargo
  • some helpers to construct a Command easily (the "split string on whitespace" thing is so useful! I wonder if we can write a proc macro to do cmd!(r#"my-bin --dist ${path.join("spam")}"#))
  • some helpers to exec command with possible streaming (this is tricky (as in, poll), see how Cargo does this) and with turning Output into a StringOutput
  • some dead-simple API to check basic stuff: execs with this status, with this substring.

That is, I'd love low-level helpers which I can use to build my own API (so, extract the result and do the matching by hand, rather than plugging into the predicates infra), plus a dead-simple, non-extensible API to write the first couple of tests.
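
For reference, the current_exe trick mentioned in the first bullet is roughly the following (a sketch of the common pattern; Cargo's own test suite does something similar, and this ignores details like the .exe suffix on Windows):

use std::path::PathBuf;

// Locate a binary built by `cargo test` by walking up from the test executable:
// target/debug/deps/mytest-<hash>  ->  target/debug/<name>
fn target_binary(name: &str) -> PathBuf {
    let mut path = std::env::current_exe().expect("current_exe failed");
    path.pop(); // drop the test executable's file name, leaving .../deps
    if path.ends_with("deps") {
        path.pop(); // leave .../debug (or .../release)
    }
    path.push(name);
    path
}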


epage commented Dec 4, 2018

some helpers to construct a Command easily (the "split string on whitespace" thing is so useful! I wonder if we can write a proc macro to do cmd!(r#"my-bin --dist ${path.join("spam")}"#))

While I've not used it, commandspec might fit the bill.

some helpers to exec command with possible streaming (this is tricky (as in, poll), see how Cargo does this) and with turning Output into a StringOutput

I've not yet dug into these parts of cargo. Where should I start?


matklad commented Dec 4, 2018


epage commented Mar 27, 2020

As an update, we switched from extension traits back to wrapping Command (though there are interop extension traits available). This was due to some API oddities when trying to make seeding stdin easier, see #85.

So quick summary

  • Concrete Command
  • Concrete Assert
  • Easy to do domain-specific assertions: you can call Assert::get_output for custom stuff
  • Dumps all command state on failure for full context when debugging
  • Command-string splitting
  • utility for current_exe trick
  • some dead-simple API to check basic stuff: execs with this status, with this substring.
  • some helpers to exec command with possible streaming (this is tricky (as in, poll), see how Cargo does this) and with turning Output into a StringOutput.

some helpers to exec command with possible streaming (this is tricky (as in, poll), see how Cargo does this) and with turning Output into a StringOutput

I'm not actually quite sure what the value-add is of all the complex streaming logic. We did recently change to writing to stdin in parallel with draining stdout and stderr to avoid buffering deadlocks, but what Cargo is doing is a lot more.
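
In today's terms, the post-switch API reads roughly like this (the binary name is a placeholder; see the current assert_cmd docs for the exact signatures):

use assert_cmd::Command;
use predicates::prelude::*;

#[test]
fn runs_and_prints() {
    Command::cargo_bin("my-bin")   // assumes a binary target named `my-bin`
        .unwrap()
        .assert()
        .success()
        .stdout(predicate::str::contains("expected substring"));
}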

@pksunkara

some dead-simple API to check basic stuff: execs with this status, with this substring.

I was trying this library out today and I was basically looking for something like this. It would really improve the experience.
