
[docs] add minor branding guidelines to CONTRIBUTING.md #21495

Merged
merged 20 commits into master from rogerluan-branding on Sep 3, 2023

Conversation

@rogerluan (Member) commented Sep 1, 2023

Checklist

Motivation and Context

This is already enforced by our CI checks:

fastlane/fastlane/Fastfile

Lines 585 to 607 in ab9fb32

desc "Ensure the correct formatting for the fastlane tools"
private_lane :ensure_tool_name_formatting do
  UI.message("🕗 Verifying tool name formatting...")
  require 'fastlane/tools'
  errors = []
  Dir.chdir("..") do
    Dir["**/*.md"].each do |path|
      content = File.read(path)
      Fastlane::TOOLS.each do |tool|
        errors << "Use _#{tool}_ instead of `#{tool}` to mention a tool in the docs in '#{path}'" if content.include?("`#{tool}`")
        errors << "Use _#{tool}_ instead of `_#{tool}_` to mention a tool in the docs in '#{path}'" if content.include?("`_#{tool}_`")
        errors << "Use [_#{tool}_] instead of [#{tool}] to mention a tool in the docs in '#{path}'" if content.include?("[#{tool}]")
        errors << "Use <em>#{tool}</em> instead of <code>#{tool}</code> to mention a tool in the docs in '#{path}'" if content.include?("<code>#{tool}</code>")
        if content.include?("_#{tool.to_s.capitalize}_") || content.include?("`#{tool.to_s.capitalize}`")
          errors << "fastlane tools have to be formatted in lower case: #{tool} in '#{path}'"
        end
      end
    end
  end
  errors.each { |a| UI.error(a) }
  UI.user_error!("Invalid formatting of one of the fastlane tools") unless errors.empty?
  UI.success("✅ fastlane tools formatting is correct")
end

But it's not documented anywhere (and I also believe that CI check is not very thorough). This PR just formalizes those guidelines, and it's a starting point for us to improve/shape them better in the future (should there be any other additions).

I wasn't sure where to place this section. Idk if we'd rather have yet another markdown file, like BRANDING.md or something similar? 🤷‍♂️

Testing Steps

Here's how it looks formatted:

[screenshot]

You can preview it here: https://github.com/fastlane/fastlane/blob/rogerluan-branding/CONTRIBUTING.md

@rogerluan (Member Author):

Heh, CI checks are working!

[screenshot]

Looks like we're gonna need to add an exception to the CI checks, or force merge this PR 👀 any preference? If implementing the exception, it'd probably be an exception on the entire file... Should we move these guidelines to a BRANDING.md or something? 🤷

@AliSoftware (Contributor) commented Sep 1, 2023

I like that it's in CONTRIBUTING.md because:

  • CONTRIBUTING.md is a standard in the OSS community, but I don't think BRANDING.md is (at least I haven't seen any OSS repo with such a file)?
  • Which also means people are less likely to miss it: they're more likely to read CONTRIBUTING.md than to think of reading BRANDING.md, which isn't a common file
  • Finally, I think GitHub nowadays has some integration with CONTRIBUTING.md, like having the UI automatically suggest that new contributors read the file before creating their first pull request, or detecting when a returning contributor opens their first PR since the CONTRIBUTING.md file was last changed (i.e. if that's their first PR since the guidelines changed)

So for all those reasons, I'd suggest keeping it in CONTRIBUTING.md like you did there.


I wonder if we might be able to be clever about the CI check and exclude just that section from the markdown when checking, instead of having to disable the check for the whole CONTRIBUTING.md file. Or maybe detect if the file is CONTRIBUTING.md AND the line being checked starts with - ❌, then ignore it, something like that?

For example, we could add a content = content.gsub(/^\s*- ❌.*$/, '') if path == 'CONTRIBUTING.md' # make an exception for the branding guidelines section of that file around this code:

fastlane/fastlane/Fastfile

Lines 591 to 593 in ab9fb32

Dir["**/*.md"].each do |path|
  content = File.read(path)
  Fastlane::TOOLS.each do |tool|
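Here's a minimal standalone sketch of that exception; the content string and surrounding setup are made up for illustration, not taken from the actual Fastfile:

```ruby
path = 'CONTRIBUTING.md'
content = <<~MARKDOWN
  - ❌ Don't write `fastlane` in backticks
  - ✅ Write _fastlane_ in italics
MARKDOWN

# Strip the intentionally-wrong "- ❌ ..." example bullets before the tool-name
# checks run, so the branding guidelines don't trip their own CI check.
content = content.gsub(/^\s*- ❌.*$/, '') if path == 'CONTRIBUTING.md'

content.include?('`fastlane`') # => false: the bad example no longer triggers the check
```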

@rogerluan (Member Author):

Fully agree with all your points @AliSoftware 😊

I just pushed some changes lmk if those are good enough 😃

Before and after enabling the "skip" for the branding guidelines:

[screenshot]

fastlane/Fastfile
line.scan(/([_ `"])(#{Regexp.escape(tool.to_s)})\1/i) do |match|
  matched_tool = match[1]
  tool_is_written_wrong_case = matched_tool != matched_tool.downcase
  looks_like_an_env_var = line.match?(/[A-Z_]+_#{matched_tool}_[A-Z_]+/)
Contributor:

That won't work for env vars that start or end with the tool name, like MATCH_FILE or UPDATE_FASTLANE, though.

Doing line.match? within the line.scan do … could also lead to false positives, in case a line contains both a part that matches the regex and one that contains an env var. Like:

You can configure Fastlane output using `FASTLANE_SKIP_CHANGELOG`.

In this case the first iteration of the do…end block will match the first Fastlane in that sentence, but looks_like_an_env_var will be true because it will match the FASTLANE_SKIP_CHANGELOG at the end of that line. And so you will end up not warning about the first match having a wrong case.

Finally, I think we should consider only allowing env vars that are ALL_CAPS in our exception of this.

In the end, I think the best way is probably to keep the first regex, but also capture any optional [A-Z_]* before and after within that Regexp, and then test the value of those in your scan block. Something probably like this:

line.scan(/([A-Z_]*)([_ `"])(#{Regexp.escape(tool.to_s)})\2([A-Z_]*)/i) do |prefix, delim, tool_name, suffix|
  wrong_case = tool_name != tool.to_s.downcase
  looks_like_an_env_var = (tool_name == tool_name.upcase) && delim == '_' && (!prefix.empty? || !suffix.empty?)
  errors << "…" if !looks_like_an_env_var && wrong_case
end
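As a self-contained illustration of this capture-group approach (wrong_case_mentions is a hypothetical helper name, not code from this PR), here's how it behaves on the tricky sentence above:

```ruby
# Hypothetical single-line version of the suggested check.
def wrong_case_mentions(line, tool)
  mentions = []
  line.scan(/([A-Z_]*)([_ `"])(#{Regexp.escape(tool.to_s)})\2([A-Z_]*)/i) do |prefix, delim, tool_name, suffix|
    wrong_case = tool_name != tool.to_s.downcase
    # Only ALL_CAPS occurrences joined to a prefix/suffix by underscores count as env vars.
    looks_like_an_env_var = (tool_name == tool_name.upcase) && delim == '_' && (!prefix.empty? || !suffix.empty?)
    mentions << tool_name if wrong_case && !looks_like_an_env_var
  end
  mentions
end

line = "You can configure Fastlane output using `FASTLANE_SKIP_CHANGELOG`."
wrong_case_mentions(line, :fastlane) # => ["Fastlane"]: flagged, while the env var is ignored
```

Unlike the line.match? version, each occurrence is judged by its own captures, so an env var at the end of the sentence can no longer mask a wrong-case Fastlane earlier in the line.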

@rogerluan (Member Author):

Amazing work dude!

Perfect solution, I didn't even know we could use that syntax with the scan block (multiple objects, one for each match) 😲 TIL!

@getaaron (Collaborator) left a comment:

a few lil suggestions, take em or leave em. Looks great and I love the tests! 😍

@rogerluan (Member Author):

@getaaron your copy suggestions were much better, thanks! Applied both 😊
@AliSoftware I applied your algo changes and also added some "real" unit tests. I wanted to use rspec and all but I couldn't find a practical way to do it? So I just added a fixture and a basic method in the Fastfile itself 🤷‍♂️ this way we can ensure future iterations won't break the hard work we (you!) did in this PR 😂 lmk if you think there's a better place to place these tests :)

@AliSoftware (Contributor) left a comment:

I'm not a fan of having the test for the lane be implemented that way instead of a proper rspec, but I'm also not sure if there's a better solution, short of implementing the lane as a method in the fastlane gem's lib/ code itself in order to have a spec for it (and then have the lane in the Fastfile call that method)… 🤔

fastlane/Fastfile
@rogerluan (Member Author):

short of implementing the lane as a method in the fastlane gem's lib/ code itself in order to have a spec for it (and then have the lane in the Fastfile call that method)

I thought about this before but it's not ideal since it shouldn't be exposed publicly to fastlane users, it's just an internal tool 😞 do you know of any way to do it like this but not expose it to the lib? 🤔

@AliSoftware
Copy link
Contributor

I thought about this before but it's not ideal since it shouldn't be exposed publicly to fastlane users, it's just an internal tool 😞 do you know of any way to do it like this but not expose it to the lib? 🤔

What about extracting the find_tool_name_formatting_errors method into a dedicated .rb file under helper/, and then writing a spec against that? A bit like how it's done for fastlane/helper/plugin_scores_helper.rb and fastlane/spec/helper/plugin_scores_helper_spec.rb?
And then require_relative that file from the Fastfile to call the find_tool_name_formatting_errors method from the lane?
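That extraction could look roughly like this; the file path, class name, and the minimal check body are illustrative assumptions, not the final merged code:

```ruby
# fastlane/helper/tool_name_formatting_helper.rb (hypothetical sketch)
module Fastlane
  module Helper
    class ToolNameFormattingHelper
      attr_accessor :path

      def initialize(path:)
        @path = path
      end

      # Returns human-readable violations found in the file at `path`.
      # Only the backtick rule is sketched here; the real lane checks more patterns.
      def find_tool_name_formatting_errors(tools: [:fastlane, :match])
        errors = []
        File.readlines(path).each_with_index do |line, index|
          tools.each do |tool|
            if line.include?("`#{tool}`")
              errors << "Use _#{tool}_ instead of `#{tool}` in '#{path}:#{index + 1}'"
            end
          end
        end
        errors
      end
    end
  end
end
```

The Fastfile would then require_relative the helper and call it from the lane, while a spec can exercise the class directly against a fixture file.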

private_lane :test_tool_name_formatting_ensurance_lane do
  content = File.read("spec/fixtures/fastfiles/tool_name_formatting.txt")
  errors = find_tool_name_formatting_errors(content: content, path: "CONTRIBUTING.md")
  errors.select! { |e| !e.end_with?("❌") }
Contributor:

The issue with this is that you're only checking that lines not ending with ❌ are not detected as errors (actually, talk about double negation… I had it wrong the first time I read that code and thought it was doing the opposite), but you're not checking that all the lines ending with ❌ are detected as errors.

If you rewrite this as a spec (as suggested in my top-level comment from a couple of minutes ago), you should expect(errors).to equal(the_exact_expected_list), as opposed to what you did, which only expects that all reported errors end with ❌ without checking that you didn't miss any of them in the original fixture.

@rogerluan (Member Author):

Your suggestion for helpers (similar to plugins) makes sense! It's literally the only file we have in the helper folder, nice 🤓 So this can definitely be moved to rspec and proper unit tests can be written. But can you also help me understand what you meant in the message you sent above? I think the tests are covering all lines, whether they're successful or not 🤔 We do check that the ones with ✅ succeed (and they do), and we also check that the ones with ❌ fail (and they do). What's missing? (Just trying to understand here, if we were to keep it, although we won't 😇)

Contributor:

Well another way of seeing this is that there are 4 categories of lines:

  1. Lines that should be considered errors, and are properly flagged by find_tool_name_formatting_errors as such
  2. Lines that should be valid, and are properly not flagged by find_tool_name_formatting_errors as errors
  3. Lines that should be considered errors, but are not properly detected and returned by find_tool_name_formatting_errors
  4. Lines that should be valid, but are incorrectly flagged as errors by find_tool_name_formatting_errors

What we want is to ensure not only that cases 1 and 2 happen (i.e. our implementation works when it's supposed to) but also that cases 3 and 4 don't happen (i.e. our implementation creates neither false positives nor false negatives)

In your code above, you only look at the return value of find_tool_name_formatting_errors (aka errors), so you're only considering cases 1 and 4 (i.e. things that have been reported as errors). And in particular you're raising a UI.error if case 4 happens (the thing that was reported as an error shouldn't have been).

But you're not checking case 3 at all, i.e. lines that should be considered as errors and should have been returned in errors when you called find_tool_name_formatting_errors, but weren't reported (in case our implementation of find_tool_name_formatting_errors would be incorrect — which is what we want to test here after all)

@rogerluan (Member Author):

Good point! 🎯

@rogerluan (Member Author):

Lemme know if what I implemented is what you had in mind @AliSoftware 😃 alternatively we could write lots of describe / it pairs, it'd provide more information about the test cases (why it should succeed, why it should fail, with more developer comments), but at the same time it sounds like over engineering haha and it would make adding more cases in the future somewhat cumbersome. lmk if you think this is ok as is :)

fastlane/Fastfile
Comment on lines 11 to 15
expected_errors = [
  "fastlane tools have to be formatted in lowercase: fastlane in 'CONTRIBUTING.md:19': _FastLane_ makes code signing management easy. ❌",
  "fastlane tools have to be formatted in lowercase: fastlane in 'CONTRIBUTING.md:23': A sentence that contains a _FastLane_ keyword, but also an env var: `FASTLANE_SKIP_CHANGELOG`. ❌"
]
expect(@helper.find_tool_name_formatting_errors).to match_array(expected_errors)
Contributor:

Yep exactly what I had in mind. Because this not only ensures that what we expect to be reported as errors are indeed reported as errors, but also checks that what we don't expect to be reported as errors and expect to be valid exceptions… isn't accidentally reported as errors either. So this should cover all 4 cases I mentioned above.

Btw, now that we have this implemented as a spec, I think it's worth removing the ❌ and ✅ at the end of each line of your fixtures, as they're no longer a needed hack and could accidentally mislead the thing being tested (e.g. imagine if our implementation of find_tool_name_formatting_errors didn't check whether the lines of CONTRIBUTING.md started with "- ❌", but instead tested include?("❌")… we don't want those end-of-line emojis to muddy the waters here).

@rogerluan (Member Author):

I thought about removing them but ended up leaving them (hence why I reworded the header of the fixture file), because I think it's still useful when we need to update this fixture in the future. I think the header explains all that so there's no "accidentally misleading" I think 🤔 unless devs don't read it. But you have a good point nonetheless, I'll remove them

@rogerluan (Member Author):

Interesting, AppVeyor is the only env where this is crashing:

[screenshot]

All the others, which run on Linux and macOS across different Ruby versions, are passing 🤔

Not sure if I should add force_encoding("UTF-8") 🤔 weird issue to begin with

@AliSoftware (Contributor) commented Sep 3, 2023

I think if you replace content = File.read(path) with content = File.read(path, mode: 'rb:BOM|UTF-8'), that should solve the issue.

That being said, it made me realize that it would be better in terms of memory footprint to use File.readlines, which reads the file line by line, rather than loading the entire file into memory via File.read only to split("\n") and iterate on each line.

So I think it would be better to:

  • Only pass the path in your initialize but not the content anymore
  • Switch your loops between the one that loops on tools and the one looping on lines
  • Replace the loop on lines with File.readlines(path, mode: 'rb:BOM|UTF-8').each do |line|
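Putting the three bullets together, the reworked loop might look like this; TOOLS and the single backtick check are stand-ins to keep the sketch self-contained, not the actual merged implementation:

```ruby
TOOLS = [:fastlane, :match] # stand-in for Fastlane::TOOLS

def find_backtick_errors(path)
  errors = []
  # 'rb:BOM|UTF-8' reads the bytes as UTF-8 and strips a leading BOM if present,
  # which should avoid the AppVeyor-only encoding crash.
  File.readlines(path, mode: 'rb:BOM|UTF-8').each_with_index do |line, index|
    TOOLS.each do |tool|
      errors << "Use _#{tool}_ instead of `#{tool}` in '#{path}:#{index + 1}'" if line.include?("`#{tool}`")
    end
  end
  errors
end
```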

@AliSoftware (Contributor):

@rogerluan I took the liberty of pushing 2 commits to implement my suggestions above (using File.readlines + using mode: 'rb:BOM|UTF-8' to indicate the encoding + running scan on " #{line} " to be able to detect formatting errors at the start of a line). Let me know what you think.
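The space-padding trick is easy to demonstrate in isolation; this toy snippet is not the pushed code, just an illustration of why the padding matters (case-sensitive and single-tool for brevity):

```ruby
pattern = /([_ `"])Fastlane\1/

# The regex needs a delimiter character on BOTH sides of the tool name, so a
# wrong-case mention at the very start of a line has no leading delimiter:
"Fastlane rocks".scan(pattern)   # => [] (the bad mention is missed)

# Wrapping the line in spaces gives boundary matches a delimiter to latch onto:
" Fastlane rocks ".scan(pattern) # => [[" "]] (the bad mention is caught)
```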

@rogerluan (Member Author):

I don't mind at all, thanks for applying these @AliSoftware! Great catch with inverting the loops, it's definitely more efficient this way (especially w.r.t. the readlines method). Also your second idea (wrapping the line in spaces) was quite neat! 👏

I think we can move forward with this PR now! 🤩

@rogerluan (Member Author):

Also thanks for pushing the boundaries here 😉 we're in a much better place now, not just with some formalized branding guidelines, but also better tooling, test coverage, and we even made our docs more consistent 🚀

@AliSoftware (Contributor):

Teamwork at its best 🙂

I'll leave you the honour of merging the PR if you're all good with it now 😃

@rogerluan (Member Author) left a comment:

Reviewed again 👍 LGTM!

@rogerluan rogerluan merged commit 8c1fc73 into master Sep 3, 2023
14 checks passed
@rogerluan rogerluan deleted the rogerluan-branding branch September 3, 2023 19:59