Merge branch 'main' into add-flake8-bugbear-b038
charliermarsh committed Jan 21, 2024
2 parents 70fb218 + a42600e commit 5f0800b
Showing 24 changed files with 275 additions and 28 deletions.
48 changes: 48 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,53 @@
# Changelog

## 0.1.14

### Preview features

- \[`flake8-bugbear`\] Add fix for `duplicate-value` (`B033`) ([#9510](https://github.com/astral-sh/ruff/pull/9510))
- \[`flake8-simplify`\] Implement `enumerate-for-loop` (`SIM113`) ([#7777](https://github.com/astral-sh/ruff/pull/7777))
- \[`pygrep-hooks`\] Add fix for `deprecated-log-warn` (`PGH002`) ([#9519](https://github.com/astral-sh/ruff/pull/9519))
- \[`pylint`\] Implement `import-private-name` (`C2701`) ([#5920](https://github.com/astral-sh/ruff/pull/5920))
- \[`refurb`\] Implement `regex-flag-alias` with fix (`FURB167`) ([#9516](https://github.com/astral-sh/ruff/pull/9516))
- \[`ruff`\] Add rule and fix to sort contents of `__all__` (`RUF022`) ([#9474](https://github.com/astral-sh/ruff/pull/9474))
- \[`tryceratops`\] Add fix for `error-instead-of-exception` (`TRY400`) ([#9520](https://github.com/astral-sh/ruff/pull/9520))

### Rule changes

- \[`flake8-pyi`\] Fix `PYI047` false negatives on PEP-695 type aliases ([#9566](https://github.com/astral-sh/ruff/pull/9566))
- \[`flake8-pyi`\] Fix `PYI049` false negatives on call-based `TypedDict`s ([#9567](https://github.com/astral-sh/ruff/pull/9567))
- \[`pylint`\] Exclude `self` and `cls` when counting method arguments (`PLR0917`) ([#9563](https://github.com/astral-sh/ruff/pull/9563))

### CLI

- `--show-settings` displays active settings in a far more readable format ([#9464](https://github.com/astral-sh/ruff/pull/9464))
- Add `--extension` support to the formatter ([#9483](https://github.com/astral-sh/ruff/pull/9483))

### Configuration

- Ignore preview status for fixable and unfixable selectors ([#9538](https://github.com/astral-sh/ruff/pull/9538))
- \[`pycodestyle`\] Use the configured tab size when expanding indents ([#9506](https://github.com/astral-sh/ruff/pull/9506))

### Bug fixes

- Recursively visit deferred AST nodes ([#9541](https://github.com/astral-sh/ruff/pull/9541))
- Visit deferred lambdas before type definitions ([#9540](https://github.com/astral-sh/ruff/pull/9540))
- \[`flake8-simplify`\] Avoid some more `enumerate-for-loop` false positives (`SIM113`) ([#9515](https://github.com/astral-sh/ruff/pull/9515))
- \[`pandas-vet`\] Limit inplace diagnostics to methods that accept inplace ([#9495](https://github.com/astral-sh/ruff/pull/9495))
- \[`pylint`\] Add the `__prepare__` method to the list of recognized dunder methods ([#9529](https://github.com/astral-sh/ruff/pull/9529))
- \[`pylint`\] Ignore unnecessary dunder calls within dunder definitions ([#9496](https://github.com/astral-sh/ruff/pull/9496))
- \[`refurb`\] Avoid bailing when `reimplemented-operator` is called on a function (`FURB118`) ([#9556](https://github.com/astral-sh/ruff/pull/9556))
- \[`ruff`\] Avoid treating named expressions as static keys (`RUF011`) ([#9494](https://github.com/astral-sh/ruff/pull/9494))

### Documentation

- Add instructions on using `noqa` with isort rules ([#9555](https://github.com/astral-sh/ruff/pull/9555))
- Documentation update: fix a URL that was giving 'page not found' ([#9565](https://github.com/astral-sh/ruff/pull/9565))
- Fix admonition in dark mode ([#9502](https://github.com/astral-sh/ruff/pull/9502))
- Update contributing docs to use `cargo bench -p ruff_benchmark` ([#9535](https://github.com/astral-sh/ruff/pull/9535))
- Update emacs integration section to include `emacs-ruff-format` ([#9403](https://github.com/astral-sh/ruff/pull/9403))
- \[`flake8-blind-except`\] Document exceptions to `blind-except` rule ([#9580](https://github.com/astral-sh/ruff/pull/9580))

## 0.1.13

### Bug fixes
6 changes: 3 additions & 3 deletions Cargo.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion README.md
@@ -150,7 +150,7 @@ Ruff can also be used as a [pre-commit](https://pre-commit.com/) hook via [`ruff
```yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.1.13
rev: v0.1.14
hooks:
# Run the linter.
- id: ruff
2 changes: 1 addition & 1 deletion crates/ruff/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "ruff"
version = "0.1.13"
version = "0.1.14"
publish = false
authors = { workspace = true }
edition = { workspace = true }
4 changes: 2 additions & 2 deletions crates/ruff_benchmark/benches/formatter.rs
@@ -7,7 +7,7 @@ use ruff_benchmark::{TestCase, TestFile, TestFileDownloadError};
use ruff_python_formatter::{format_module_ast, PreviewMode, PyFormatOptions};
use ruff_python_index::CommentRangesBuilder;
use ruff_python_parser::lexer::lex;
use ruff_python_parser::{parse_tokens, Mode};
use ruff_python_parser::{allocate_tokens_vec, parse_tokens, Mode};

#[cfg(target_os = "windows")]
#[global_allocator]
@@ -52,7 +52,7 @@ fn benchmark_formatter(criterion: &mut Criterion) {
BenchmarkId::from_parameter(case.name()),
&case,
|b, case| {
let mut tokens = Vec::new();
let mut tokens = allocate_tokens_vec(case.code());
let mut comment_ranges = CommentRangesBuilder::default();

for result in lex(case.code(), Mode::Module) {
2 changes: 1 addition & 1 deletion crates/ruff_linter/Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "ruff_linter"
version = "0.1.13"
version = "0.1.14"
publish = false
authors = { workspace = true }
edition = { workspace = true }
@@ -0,0 +1,9 @@
print([1, 2, 3][3]) # PLE0643
print([1, 2, 3][-4]) # PLE0643
print([1, 2, 3][999999999999999999999999999999999999999999]) # PLE0643
print([1, 2, 3][-999999999999999999999999999999999999999999]) # PLE0643

print([1, 2, 3][2]) # OK
print([1, 2, 3][0]) # OK
print([1, 2, 3][-3]) # OK
print([1, 2, 3][3:]) # OK
3 changes: 3 additions & 0 deletions crates/ruff_linter/src/checkers/ast/analyze/expression.rs
@@ -125,6 +125,9 @@ pub(crate) fn expression(expr: &Expr, checker: &mut Checker) {
if checker.enabled(Rule::SliceCopy) {
refurb::rules::slice_copy(checker, subscript);
}
if checker.enabled(Rule::PotentialIndexError) {
pylint::rules::potential_index_error(checker, value, slice);
}

pandas_vet::rules::subscript(checker, value, expr);
}
1 change: 1 addition & 0 deletions crates/ruff_linter/src/codes.rs
@@ -229,6 +229,7 @@ pub fn code_to_rule(linter: Linter, code: &str) -> Option<(RuleGroup, Rule)> {
(Pylint, "E0307") => (RuleGroup::Stable, rules::pylint::rules::InvalidStrReturnType),
(Pylint, "E0604") => (RuleGroup::Stable, rules::pylint::rules::InvalidAllObject),
(Pylint, "E0605") => (RuleGroup::Stable, rules::pylint::rules::InvalidAllFormat),
(Pylint, "E0643") => (RuleGroup::Preview, rules::pylint::rules::PotentialIndexError),
(Pylint, "E0704") => (RuleGroup::Preview, rules::pylint::rules::MisplacedBareRaise),
(Pylint, "E1132") => (RuleGroup::Preview, rules::pylint::rules::RepeatedKeywordArgument),
(Pylint, "E1142") => (RuleGroup::Stable, rules::pylint::rules::AwaitOutsideAsync),
@@ -34,6 +34,25 @@ use crate::checkers::ast::Checker;
/// ...
/// ```
///
/// Exceptions that are re-raised will _not_ be flagged, as they're expected to
/// be caught elsewhere:
/// ```python
/// try:
/// foo()
/// except BaseException:
/// raise
/// ```
///
/// Exceptions that are logged via `logging.exception()` or `logging.error()`
/// with `exc_info` enabled will _not_ be flagged, as this is a common pattern
/// for propagating exception traces:
/// ```python
/// try:
/// foo()
/// except BaseException:
/// logging.exception("Something went wrong")
/// ```
///
/// ## References
/// - [Python documentation: The `try` statement](https://docs.python.org/3/reference/compound_stmts.html#the-try-statement)
/// - [Python documentation: Exception hierarchy](https://docs.python.org/3/library/exceptions.html#exception-hierarchy)
1 change: 1 addition & 0 deletions crates/ruff_linter/src/rules/pylint/mod.rs
@@ -167,6 +167,7 @@ mod tests {
#[test_case(Rule::NoClassmethodDecorator, Path::new("no_method_decorator.py"))]
#[test_case(Rule::UnnecessaryDunderCall, Path::new("unnecessary_dunder_call.py"))]
#[test_case(Rule::NoStaticmethodDecorator, Path::new("no_method_decorator.py"))]
#[test_case(Rule::PotentialIndexError, Path::new("potential_index_error.py"))]
#[test_case(Rule::SuperWithoutBrackets, Path::new("super_without_brackets.py"))]
#[test_case(
Rule::UnnecessaryDictIndexLookup,
2 changes: 2 additions & 0 deletions crates/ruff_linter/src/rules/pylint/rules/mod.rs
@@ -42,6 +42,7 @@ pub(crate) use no_self_use::*;
pub(crate) use non_ascii_module_import::*;
pub(crate) use non_ascii_name::*;
pub(crate) use nonlocal_without_binding::*;
pub(crate) use potential_index_error::*;
pub(crate) use property_with_parameters::*;
pub(crate) use redefined_argument_from_local::*;
pub(crate) use redefined_loop_name::*;
@@ -124,6 +125,7 @@ mod no_self_use;
mod non_ascii_module_import;
mod non_ascii_name;
mod nonlocal_without_binding;
mod potential_index_error;
mod property_with_parameters;
mod redefined_argument_from_local;
mod redefined_loop_name;
73 changes: 73 additions & 0 deletions crates/ruff_linter/src/rules/pylint/rules/potential_index_error.rs
@@ -0,0 +1,73 @@
use ruff_diagnostics::{Diagnostic, Violation};
use ruff_macros::{derive_message_formats, violation};
use ruff_python_ast::{self as ast, Expr};
use ruff_text_size::Ranged;

use crate::checkers::ast::Checker;

/// ## What it does
/// Checks for hard-coded sequence accesses that are known to be out of bounds.
///
/// ## Why is this bad?
/// Attempting to access a sequence with an out-of-bounds index will cause an
/// `IndexError` to be raised at runtime. When the sequence and index are
/// defined statically (e.g., subscripts on `list` and `tuple` literals, with
/// integer indexes), such errors can be detected ahead of time.
///
/// ## Example
/// ```python
/// print([0, 1, 2][3])
/// ```
#[violation]
pub struct PotentialIndexError;

impl Violation for PotentialIndexError {
#[derive_message_formats]
fn message(&self) -> String {
format!("Potential IndexError")
}
}

/// PLE0643
pub(crate) fn potential_index_error(checker: &mut Checker, value: &Expr, slice: &Expr) {
// Determine the length of the sequence.
let length = match value {
Expr::Tuple(ast::ExprTuple { elts, .. }) | Expr::List(ast::ExprList { elts, .. }) => {
match i64::try_from(elts.len()) {
Ok(length) => length,
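// Bail out if the length can't be represented as an `i64`; such sequences are too large to check reliably.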
Err(_) => return,
}
}
_ => {
return;
}
};

// Determine the index value.
let index = match slice {
Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(number_value),
..
}) => number_value.as_i64(),
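// Negative indices appear as a unary minus applied to an integer literal (e.g. `-4`).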
Expr::UnaryOp(ast::ExprUnaryOp {
op: ast::UnaryOp::USub,
operand,
..
}) => match operand.as_ref() {
Expr::NumberLiteral(ast::ExprNumberLiteral {
value: ast::Number::Int(number_value),
..
}) => number_value.as_i64().map(|number| -number),
_ => return,
},
_ => return,
};

// Emit a diagnostic if the index is out of bounds. If the index can't be represented as an
// `i64`, but the length _can_, then the index is definitely out of bounds.
if index.map_or(true, |index| index >= length || index < -length) {
checker
.diagnostics
.push(Diagnostic::new(PotentialIndexError, slice.range()));
}
}
@@ -0,0 +1,40 @@
---
source: crates/ruff_linter/src/rules/pylint/mod.rs
---
potential_index_error.py:1:17: PLE0643 Potential IndexError
|
1 | print([1, 2, 3][3]) # PLE0643
| ^ PLE0643
2 | print([1, 2, 3][-4]) # PLE0643
3 | print([1, 2, 3][999999999999999999999999999999999999999999]) # PLE0643
|

potential_index_error.py:2:17: PLE0643 Potential IndexError
|
1 | print([1, 2, 3][3]) # PLE0643
2 | print([1, 2, 3][-4]) # PLE0643
| ^^ PLE0643
3 | print([1, 2, 3][999999999999999999999999999999999999999999]) # PLE0643
4 | print([1, 2, 3][-999999999999999999999999999999999999999999]) # PLE0643
|

potential_index_error.py:3:17: PLE0643 Potential IndexError
|
1 | print([1, 2, 3][3]) # PLE0643
2 | print([1, 2, 3][-4]) # PLE0643
3 | print([1, 2, 3][999999999999999999999999999999999999999999]) # PLE0643
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLE0643
4 | print([1, 2, 3][-999999999999999999999999999999999999999999]) # PLE0643
|

potential_index_error.py:4:17: PLE0643 Potential IndexError
|
2 | print([1, 2, 3][-4]) # PLE0643
3 | print([1, 2, 3][999999999999999999999999999999999999999999]) # PLE0643
4 | print([1, 2, 3][-999999999999999999999999999999999999999999]) # PLE0643
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PLE0643
5 |
6 | print([1, 2, 3][2]) # OK
|


4 changes: 2 additions & 2 deletions crates/ruff_python_index/src/comment_ranges.rs
@@ -2,7 +2,7 @@ use std::fmt::Debug;

use ruff_python_ast::PySourceType;
use ruff_python_parser::lexer::{lex, LexResult, LexicalError};
use ruff_python_parser::{AsMode, Tok};
use ruff_python_parser::{allocate_tokens_vec, AsMode, Tok};
use ruff_python_trivia::CommentRanges;
use ruff_text_size::TextRange;

@@ -28,7 +28,7 @@ pub fn tokens_and_ranges(
source: &str,
source_type: PySourceType,
) -> Result<(Vec<LexResult>, CommentRanges), LexicalError> {
let mut tokens = Vec::new();
let mut tokens = allocate_tokens_vec(source);
let mut comment_ranges = CommentRangesBuilder::default();

for result in lex(source, source_type.as_mode()) {
34 changes: 30 additions & 4 deletions crates/ruff_python_parser/src/lib.rs
@@ -78,14 +78,14 @@
//! These tokens can be directly fed into the `ruff_python_parser` to generate an AST:
//!
//! ```
//! use ruff_python_parser::{lexer::lex, Mode, parse_tokens};
//! use ruff_python_parser::{Mode, parse_tokens, tokenize_all};
//!
//! let python_source = r#"
//! def is_odd(i):
//! return bool(i & 1)
//! "#;
//! let tokens = lex(python_source, Mode::Module);
//! let ast = parse_tokens(tokens.collect(), python_source, Mode::Module);
//! let tokens = tokenize_all(python_source, Mode::Module);
//! let ast = parse_tokens(tokens, python_source, Mode::Module);
//!
//! assert!(ast.is_ok());
//! ```
@@ -133,17 +133,43 @@ pub mod typing;

/// Collect tokens up to and including the first error.
pub fn tokenize(contents: &str, mode: Mode) -> Vec<LexResult> {
let mut tokens: Vec<LexResult> = vec![];
let mut tokens: Vec<LexResult> = allocate_tokens_vec(contents);
for tok in lexer::lex(contents, mode) {
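// Record whether the result is an error before it's moved into the vector.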
let is_err = tok.is_err();
tokens.push(tok);
if is_err {
break;
}
}

tokens
}

/// Tokenizes all tokens.
///
/// It differs from [`tokenize`] in that it tokenizes all tokens and doesn't stop
/// after the first `Err`.
pub fn tokenize_all(contents: &str, mode: Mode) -> Vec<LexResult> {
let mut tokens = allocate_tokens_vec(contents);
for token in lexer::lex(contents, mode) {
tokens.push(token);
}
tokens
}

/// Allocates a [`Vec`] with an approximated capacity to fit all tokens
/// of `contents`.
///
/// See [#9546](https://github.com/astral-sh/ruff/pull/9546) for a more detailed explanation.
pub fn allocate_tokens_vec(contents: &str) -> Vec<LexResult> {
Vec::with_capacity(approximate_tokens_lower_bound(contents))
}

/// Approximates the number of tokens when lexing `contents`.
fn approximate_tokens_lower_bound(contents: &str) -> usize {
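// Heuristic: assume roughly 0.15 tokens per byte of source (about one token per seven characters).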
contents.len().saturating_mul(15) / 100
}

/// Parse a full Python program from its tokens.
pub fn parse_program_tokens(
tokens: Vec<LexResult>,
