The PySpark DataFrame operations below generate `Unsupported operand [58]` and `Incompatible parameter type [6]` errors even though they are valid and even suggested in the Spark documentation.
```text
$ pyre check
ƛ Found 16 type errors!
sample.py:9:23 Unsupported operand [58]: `+` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:9:23 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.withColumn`, for 2nd positional argument, expected `Column` but got `int`.
sample.py:10:23 Unsupported operand [58]: `-` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:10:23 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.withColumn`, for 2nd positional argument, expected `Column` but got `int`.
sample.py:11:23 Unsupported operand [58]: `*` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:11:23 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.withColumn`, for 2nd positional argument, expected `Column` but got `int`.
sample.py:12:23 Unsupported operand [58]: `/` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:12:23 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.withColumn`, for 2nd positional argument, expected `Column` but got `float`.
sample.py:13:12 Unsupported operand [58]: `>` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:13:12 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.filter`, for 1st positional argument, expected `Union[Column, str]` but got `bool`.
sample.py:14:12 Unsupported operand [58]: `>=` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:14:12 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.filter`, for 1st positional argument, expected `Union[Column, str]` but got `bool`.
sample.py:15:12 Unsupported operand [58]: `<` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:15:12 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.filter`, for 1st positional argument, expected `Union[Column, str]` but got `bool`.
sample.py:16:12 Unsupported operand [58]: `<=` is not supported for operand types `pyspark.sql.column.Column` and `int`.
sample.py:16:12 Incompatible parameter type [6]: In call `pyspark.sql.dataframe.DataFrame.filter`, for 1st positional argument, expected `Union[Column, str]` but got `bool`.
```
From my inspection, part (or possibly all) of this issue is caused by Pyre only treating a function defined with `def __add__(self, other): ...` as a magic method of a class. `Column`, however, defines `__add__` and many of its other magic methods by class-level assignment to a helper (e.g. `__add__ = _bin_op("plus")` in `pyspark/sql/column.py`), which Pyre apparently does not recognize as operator support.
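The pattern in question can be sketched without PySpark. In the sketch below, `_bin_op` and `Column` are simplified stand-ins (PySpark's real `_bin_op` delegates to JVM column methods, not string formatting): the operators are bound by class-level assignment, which Python resolves normally at runtime even though Pyre reportedly only recognizes `def`-style definitions.

```python
# Minimal stand-in for the pattern PySpark's Column uses; the names mirror
# pyspark/sql/column.py but the implementation here is a simplification.
from typing import Any


def _bin_op(name: str):
    """Return a binary-operator method that records the operation."""
    def op(self: "Column", other: Any) -> "Column":
        return Column(f"({self.expr} {name} {other})")
    return op


class Column:
    def __init__(self, expr: str) -> None:
        self.expr = expr

    # Magic methods defined by assignment rather than `def` -- the style
    # Pyre reportedly fails to treat as operator support.
    __add__ = _bin_op("+")
    __sub__ = _bin_op("-")
    __gt__ = _bin_op(">")


print((Column("age") + 1).expr)  # -> (age + 1)
```

At runtime `Column("age") + 1` dispatches to the assigned `__add__` exactly as if it had been written with `def`, which is why the flagged PySpark code is valid.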
### Reproduction steps

Python snippet `sample.py`:

### Expected behavior

Running `pyre check` should not report any errors, as the code is valid. See the docs:

### Logs

`pyre_rage.log`