This is an interesting one as the parser eradicates the duplicates, so we won't see this in the AST. For sets this just means an item is removed — for maps it's potentially worse as the value could differ too, and you don't know which one you'll get.
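As a quick illustration of the hazard described above, Python literals behave analogously (this is a Python analogue only — which duplicate value Rego's parser keeps is unspecified, per the comment above; in Python the last one wins):

```python
# Duplicate map key: one value is silently lost before we ever see it.
m = {"a": 1, "a": 2}
# Duplicate set element: silently collapsed to a single item.
s = {"a", "a", "b"}

print(m)            # in Python the last value wins: {'a': 2}
print(s == {"a", "b"})  # True
```

The linter's problem is exactly that this collapse happens at parse time, so the duplicate never reaches the AST.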
Not sure how to best approach this. Since we do have the location of the set or map, perhaps we could add a custom built-in function that scanned the source code (which is available in evaluation already) from that location? Can probably use the tokenizer from OPA.
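The scanning idea might look something like this toy sketch (an assumption-laden illustration, not OPA's tokenizer: a real implementation would use a proper lexer and handle commas and braces inside strings, which this depth counter does not):

```python
def duplicate_keys(literal: str) -> list[str]:
    """Given the raw source text of an object literal such as
    '{"a": 1, "a": 2}', return any top-level keys that repeat.
    Nested objects are skipped by tracking brace/bracket depth."""
    body = literal.strip()[1:-1]
    depth = 0
    items, current = [], []
    for ch in body:
        if ch in "{[(":
            depth += 1
        elif ch in "}])":
            depth -= 1
        if ch == "," and depth == 0:
            # Top-level item boundary: close off the current item.
            items.append("".join(current))
            current = []
        else:
            current.append(ch)
    if current:
        items.append("".join(current))
    seen, dups = set(), []
    for item in items:
        key = item.split(":", 1)[0].strip()
        if key in seen:
            dups.append(key)
        seen.add(key)
    return dups
```

For example, `duplicate_keys('{"a": 1, "b": 2, "a": 3}')` reports `'"a"'`, while duplicates buried in nested maps are ignored — consistent with the simplification suggested below.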
It's fine to only consider top-level items if it simplifies the implementation, i.e. to ignore duplicates in nested map structures.
Created issue in the OPA backlog to have Text included in Location when requested. This would help us parse the text of these sets / objects without having to include the scanner code from OPA, which would need to be copy-pasted as it's marked as internal.
I learned some days ago that the formatter will have these erased, so this is sort of covered by the opa-fmt rule. While I don't like relying on that, it does make this less interesting to pursue given the effort required. I'll close this for the time being.
This is almost certainly a mistake, and the user probably intended some other key or value in its place.
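For instance (an invented illustration in Python, not an example from the original report — the key names are hypothetical):

```python
# The second "http" entry silently overwrites the first; the author
# almost certainly meant a different key (or a different value).
ports = {
    "http": 80,
    "http": 8080,
}

print(ports)  # only one entry survives: {'http': 8080}
```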