
Comparing changes

base repository: koxudaxi/datamodel-code-generator
base: 0.27.2
head repository: koxudaxi/datamodel-code-generator
compare: 0.27.3
  • 6 commits
  • 116 files changed
  • 2 contributors

Commits on Feb 7, 2025

  1. Reuse extras instead of dependency groups (#2307)

    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    gaborbernat authored Feb 7, 2025
    ba52cea
  2. Set line length to 120 characters (#2310)

    This deletes ~ 1700 lines.
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    gaborbernat authored Feb 7, 2025
    dea7b4b
  3. Use src layout (#2311)

    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    gaborbernat authored Feb 7, 2025
    19134de

Commits on Feb 10, 2025

  1. [pre-commit.ci] pre-commit autoupdate (#2316)

    updates:
    - [github.com/astral-sh/ruff-pre-commit: v0.9.4 → v0.9.6](astral-sh/ruff-pre-commit@v0.9.4...v0.9.6)
    
    Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
    pre-commit-ci[bot] authored Feb 10, 2025
    288ef1f
  2. YML to YAML (#2317)

    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    gaborbernat authored Feb 10, 2025
    66e5876

Commits on Feb 11, 2025

  1. Add more ruff checks and use defaults (#2318)

    * More ruff
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Handle test suite
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Format tests
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Fix config
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Automatic fixes
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Manual test fixes
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Batch 1 src fixes
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Fix type annotations
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Finish type checks
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Restore 3.8 support
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    * Add future imports
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    
    ---------
    
    Signed-off-by: Bernát Gábor <bgabor8@bloomberg.net>
    gaborbernat authored Feb 11, 2025
    1acaa9b
Showing with 9,881 additions and 12,142 deletions.
  1. 0 .github/{FUNDING.yml → FUNDING.yaml}
  2. 0 .github/{dependabot.yml → dependabot.yaml}
  3. 0 .github/workflows/{codeql.yml → codeql.yaml}
  4. 0 .github/workflows/{codespell.yml → codespell.yaml}
  5. +1 −1 .github/workflows/{codspeed.yml → codspeed.yaml}
  6. 0 .github/workflows/{docs.yml → docs.yaml}
  7. 0 .github/workflows/{publish.yml → publish.yaml}
  8. +5 −8 .github/workflows/{test.yml → test.yaml}
  9. +4 −7 .pre-commit-config.yaml
  10. +0 −526 datamodel_code_generator/arguments.py
  11. +0 −30 datamodel_code_generator/http.py
  12. +0 −127 datamodel_code_generator/imports.py
  13. +0 −13 datamodel_code_generator/model/imports.py
  14. +0 −47 datamodel_code_generator/model/pydantic/__init__.py
  15. +0 −325 datamodel_code_generator/model/pydantic/base_model.py
  16. +0 −35 datamodel_code_generator/model/pydantic/imports.py
  17. +0 −36 datamodel_code_generator/model/pydantic_v2/__init__.py
  18. +0 −247 datamodel_code_generator/model/pydantic_v2/base_model.py
  19. +0 −5 datamodel_code_generator/model/pydantic_v2/imports.py
  20. +0 −33 datamodel_code_generator/parser/__init__.py
  21. +0 −22 datamodel_code_generator/pydantic_patch.py
  22. 0 mkdocs.yml → mkdocs.yaml
  23. +72 −53 pyproject.toml
  24. +26 −27 scripts/update_command_help_on_markdown.py
  25. +159 −190 { → src}/datamodel_code_generator/__init__.py
  26. +151 −173 { → src}/datamodel_code_generator/__main__.py
  27. +523 −0 src/datamodel_code_generator/arguments.py
  28. +77 −99 { → src}/datamodel_code_generator/format.py
  29. +29 −0 src/datamodel_code_generator/http.py
  30. +120 −0 src/datamodel_code_generator/imports.py
  31. +26 −31 { → src}/datamodel_code_generator/model/__init__.py
  32. +94 −127 { → src}/datamodel_code_generator/model/base.py
  33. +58 −59 { → src}/datamodel_code_generator/model/dataclass.py
  34. +34 −30 { → src}/datamodel_code_generator/model/enum.py
  35. +15 −0 src/datamodel_code_generator/model/imports.py
  36. +112 −126 { → src}/datamodel_code_generator/model/msgspec.py
  37. +34 −0 src/datamodel_code_generator/model/pydantic/__init__.py
  38. +306 −0 src/datamodel_code_generator/model/pydantic/base_model.py
  39. +2 −2 { → src}/datamodel_code_generator/model/pydantic/custom_root_type.py
  40. +6 −4 { → src}/datamodel_code_generator/model/pydantic/dataclass.py
  41. +37 −0 src/datamodel_code_generator/model/pydantic/imports.py
  42. +86 −117 { → src}/datamodel_code_generator/model/pydantic/types.py
  43. +36 −0 src/datamodel_code_generator/model/pydantic_v2/__init__.py
  44. +246 −0 src/datamodel_code_generator/model/pydantic_v2/base_model.py
  45. +7 −0 src/datamodel_code_generator/model/pydantic_v2/imports.py
  46. +6 −6 { → src}/datamodel_code_generator/model/pydantic_v2/root_model.py
  47. +8 −7 { → src}/datamodel_code_generator/model/pydantic_v2/types.py
  48. +1 −1 { → src}/datamodel_code_generator/model/rootmodel.py
  49. +33 −32 { → src}/datamodel_code_generator/model/scalar.py
  50. 0 { → src}/datamodel_code_generator/model/template/Enum.jinja2
  51. 0 { → src}/datamodel_code_generator/model/template/Scalar.jinja2
  52. 0 { → src}/datamodel_code_generator/model/template/TypedDict.jinja2
  53. 0 { → src}/datamodel_code_generator/model/template/TypedDictClass.jinja2
  54. 0 { → src}/datamodel_code_generator/model/template/TypedDictFunction.jinja2
  55. 0 { → src}/datamodel_code_generator/model/template/Union.jinja2
  56. 0 { → src}/datamodel_code_generator/model/template/dataclass.jinja2
  57. 0 { → src}/datamodel_code_generator/model/template/msgspec.jinja2
  58. 0 { → src}/datamodel_code_generator/model/template/pydantic/BaseModel.jinja2
  59. 0 { → src}/datamodel_code_generator/model/template/pydantic/BaseModel_root.jinja2
  60. 0 { → src}/datamodel_code_generator/model/template/pydantic/Config.jinja2
  61. 0 { → src}/datamodel_code_generator/model/template/pydantic/dataclass.jinja2
  62. 0 { → src}/datamodel_code_generator/model/template/pydantic_v2/BaseModel.jinja2
  63. 0 { → src}/datamodel_code_generator/model/template/pydantic_v2/ConfigDict.jinja2
  64. 0 { → src}/datamodel_code_generator/model/template/pydantic_v2/RootModel.jinja2
  65. 0 { → src}/datamodel_code_generator/model/template/root.jinja2
  66. +42 −41 { → src}/datamodel_code_generator/model/typed_dict.py
  67. +19 −17 { → src}/datamodel_code_generator/model/types.py
  68. +21 −17 { → src}/datamodel_code_generator/model/union.py
  69. +34 −0 src/datamodel_code_generator/parser/__init__.py
  70. +320 −492 { → src}/datamodel_code_generator/parser/base.py
  71. +80 −111 { → src}/datamodel_code_generator/parser/graphql.py
  72. +422 −580 { → src}/datamodel_code_generator/parser/jsonschema.py
  73. +175 −204 { → src}/datamodel_code_generator/parser/openapi.py
  74. 0 { → src}/datamodel_code_generator/py.typed
  75. +21 −0 src/datamodel_code_generator/pydantic_patch.py
  76. +192 −274 { → src}/datamodel_code_generator/reference.py
  77. +147 −182 { → src}/datamodel_code_generator/types.py
  78. +22 −26 { → src}/datamodel_code_generator/util.py
  79. +2 −2 tests/data/expected/main/jsonschema/custom_formatters.py
  80. +1 −1 tests/data/expected/main/jsonschema/duplicate_field_constraints/common.py
  81. +1 −1 tests/data/expected/main/jsonschema/duplicate_field_constraints/test.py
  82. +1 −1 tests/data/expected/main/jsonschema/duplicate_field_constraints_msgspec/common.py
  83. +1 −1 tests/data/expected/main/jsonschema/duplicate_field_constraints_msgspec/test.py
  84. +1 −1 .../expected/main/jsonschema/duplicate_field_constraints_msgspec_py38_collapse_root_models/common.py
  85. +1 −1 ...ta/expected/main/jsonschema/duplicate_field_constraints_msgspec_py38_collapse_root_models/test.py
  86. +1 −1 tests/data/jsonschema/duplicate_field_constraints/{common.yml → common.yaml}
  87. +2 −2 tests/data/jsonschema/duplicate_field_constraints/{test.yml → test.yaml}
  88. +4 −4 tests/data/python/custom_formatters/add_license.py
  89. +88 −113 tests/main/graphql/test_annotated.py
  90. +176 −227 tests/main/graphql/test_main_graphql.py
  91. +2,135 −2,753 tests/main/jsonschema/test_main_jsonschema.py
  92. +1,811 −2,354 tests/main/openapi/test_main_openapi.py
  93. +30 −40 tests/main/test_main_csv.py
  94. +47 −55 tests/main/test_main_general.py
  95. +140 −177 tests/main/test_main_json.py
  96. +18 −20 tests/main/test_main_yaml.py
  97. +49 −45 tests/main/test_types.py
  98. +59 −71 tests/model/pydantic/test_base_model.py
  99. +3 −3 tests/model/pydantic/test_constraint.py
  100. +29 −41 tests/model/pydantic/test_custom_root_type.py
  101. +19 −21 tests/model/pydantic/test_data_class.py
  102. +186 −192 tests/model/pydantic/test_types.py
  103. +25 −29 tests/model/pydantic_v2/test_root_model.py
  104. +99 −121 tests/model/test_base.py
  105. +153 −157 tests/parser/test_base.py
  106. +236 −277 tests/parser/test_jsonschema.py
  107. +212 −373 tests/parser/test_openapi.py
  108. +29 −37 tests/test_format.py
  109. +12 −10 tests/test_imports.py
  110. +30 −32 tests/test_infer_input_type.py
  111. +127 −181 tests/test_main_kr.py
  112. +36 −40 tests/test_reference.py
  113. +6 −4 tests/test_resolver.py
  114. +24 −22 tests/test_types.py
  115. +13 −10 tox.ini
  116. +265 −232 uv.lock
2 changes: 1 addition & 1 deletion .github/workflows/codspeed.yml → .github/workflows/codspeed.yaml
@@ -26,7 +26,7 @@ jobs:
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v5
- name: Install dependencies
run: uv sync
run: uv sync --all-extras
- name: Run benchmarks
uses: CodSpeedHQ/action@v3
with:
13 changes: 5 additions & 8 deletions .github/workflows/test.yml → .github/workflows/test.yaml
@@ -26,7 +26,7 @@ jobs:
- tox_env: py3.12-black24
- tox_env: py3.12-black23
- tox_env: py3.12-black22
- tox_env: py3.9-black19
- tox_env: py3.8-black19
- tox_env: py3.8-pydantic18
- tox_env: py3.8-isort4
runs-on: ${{ matrix.os == '' && 'ubuntu-24.04' || matrix.os }}
@@ -38,6 +38,9 @@ jobs:
fetch-depth: 0
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v5
with:
enable-cache: true
cache-suffix: "${{ matrix.py || matrix.tox_env }}"
- name: Install tox
run: uv tool install --python-preference only-managed --python 3.13 tox --with tox-uv
- name: Setup Python test environment
@@ -71,14 +74,8 @@ jobs:
fetch-depth: 0
- name: Install the latest version of uv
uses: astral-sh/setup-uv@v5
with:
enable-cache: true
cache-dependency-glob: "pyproject.toml"
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install hatch
- name: Install tox
run: uv tool install --python-preference only-managed --python 3.13 tox --with tox-uv
- name: Build package to generate version
run: uv build --python 3.13 --python-preference only-managed --wheel . --out-dir dist
- name: Setup coverage tool
run: tox run -e coverage --notest
env:
11 changes: 4 additions & 7 deletions .pre-commit-config.yaml
@@ -14,17 +14,14 @@ repos:
hooks:
- id: pyproject-fmt
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: 'v0.9.4'
rev: 'v0.9.6'
hooks:
- id: ruff
files: "^datamodel_code_generator|^tests"
exclude: "^tests/data"
args: [ --fix ]
- id: ruff-format
files: "^datamodel_code_generator|^tests"
exclude: "^tests/data"
- id: ruff
exclude: "^tests/data"
args: ["--exit-non-zero-on-fix"]
- repo: https://github.com/codespell-project/codespell
# Configuration for codespell is in pyproject.toml
rev: v2.4.1
hooks:
- id: codespell
526 changes: 0 additions & 526 deletions datamodel_code_generator/arguments.py

This file was deleted.

30 changes: 0 additions & 30 deletions datamodel_code_generator/http.py

This file was deleted.

127 changes: 0 additions & 127 deletions datamodel_code_generator/imports.py

This file was deleted.

13 changes: 0 additions & 13 deletions datamodel_code_generator/model/imports.py

This file was deleted.

47 changes: 0 additions & 47 deletions datamodel_code_generator/model/pydantic/__init__.py

This file was deleted.

325 changes: 0 additions & 325 deletions datamodel_code_generator/model/pydantic/base_model.py

This file was deleted.

35 changes: 0 additions & 35 deletions datamodel_code_generator/model/pydantic/imports.py

This file was deleted.

36 changes: 0 additions & 36 deletions datamodel_code_generator/model/pydantic_v2/__init__.py

This file was deleted.

247 changes: 0 additions & 247 deletions datamodel_code_generator/model/pydantic_v2/base_model.py

This file was deleted.

5 changes: 0 additions & 5 deletions datamodel_code_generator/model/pydantic_v2/imports.py

This file was deleted.

33 changes: 0 additions & 33 deletions datamodel_code_generator/parser/__init__.py

This file was deleted.

22 changes: 0 additions & 22 deletions datamodel_code_generator/pydantic_patch.py

This file was deleted.

125 changes: 72 additions & 53 deletions pyproject.toml
@@ -29,7 +29,6 @@ classifiers = [
dynamic = [
"version",
]

dependencies = [
"argcomplete>=2.10.1,<4",
"black>=19.10b0",
@@ -42,7 +41,12 @@ dependencies = [
"pyyaml>=6.0.1",
"tomli>=2.2.1,<3; python_version<='3.11'",
]

optional-dependencies.all = [
"datamodel-code-generator[debug]",
"datamodel-code-generator[graphql]",
"datamodel-code-generator[http]",
"datamodel-code-generator[validation]",
]
optional-dependencies.debug = [
"pysnooper>=0.4.1,<2",
]
@@ -63,18 +67,13 @@ scripts.datamodel-codegen = "datamodel_code_generator.__main__:main"
[dependency-groups]
dev = [
{ include-group = "coverage" },
{ include-group = "debug" },
{ include-group = "docs" },
{ include-group = "fix" },
{ include-group = "graphql" },
{ include-group = "http" },
{ include-group = "pkg-meta" },
{ include-group = "test" },
{ include-group = "type" },
{ include-group = "validation" },
]
test = [
"diff-cover>=7.7",
"freezegun",
"pytest>=6.1",
"pytest>=8.3.4",
@@ -84,11 +83,8 @@ test = [
"pytest-cov>=5",
"pytest-mock>=3.14",
"pytest-xdist>=3.3.1",
"setuptools; python_version<'3.9'",
{ include-group = "debug" },
{ include-group = "graphql" },
{ include-group = "http" },
{ include-group = "validation" },
"setuptools; python_version<'3.10'", # PyCharm debugger needs it
{ include-group = "coverage" },
]
type = [
"pyright>=1.1.393",
@@ -102,19 +98,6 @@ docs = [
"mkdocs>=1.6",
"mkdocs-material>=9.5.31",
]
debug = [
"pysnooper>=0.4.1,<2",
]
graphql = [
"graphql-core>=3.2.3",
]
http = [
"httpx>=0.24.1",
]
validation = [
"openapi-spec-validator>=0.2.8,<0.7",
"prance>=0.18.2",
]
black19-pydantic18 = [ "black==19.10b0", "pydantic==1.8.2" ]
black22 = [ "black==22.1" ]
black23 = [ "black==23.12" ]
@@ -124,28 +107,60 @@ isort4-pydantic15 = [ "isort[pyproject]==4.3.21", "pydantic==1.5.1" ]
fix = [ "pre-commit>=3.5" ]
pkg-meta = [ "check-wheel-contents>=0.6.1", "twine>=6.1", "uv>=0.5.22" ]
coverage = [
"covdefaults>=2.3",
"coverage[toml]>=7.6.1",
"diff-cover>=7.7",
]

[tool.hatch]
build.dev-mode-dirs = [ "." ]
build.dev-mode-dirs = [ "src" ]
build.targets.sdist.include = [
"/datamodel_code_generator",
"/src",
"/tests",
]
version.source = "vcs"

[tool.ruff]
line-length = 88
line-length = 120
extend-exclude = [ "tests/data" ]
format.indent-style = "space"
format.quote-style = "single"
format.line-ending = "auto"
format.skip-magic-trailing-comma = false
lint.extend-select = [ "C4", "I", "Q", "RUF100", "UP" ]
lint.ignore = [ "E501", "Q000", "Q003", "UP006", "UP007" ]
lint.flake8-quotes = { inline-quotes = 'single', multiline-quotes = 'double' }
format.preview = true
format.docstring-code-format = true
lint.select = [
"ALL",
]
lint.ignore = [
"ANN401", # Any as type annotation is allowed
"C901", # complex structure
"COM812", # Conflict with formatter
"CPY", # No copyright statements
"D", # limited documentation
"DOC", # limited documentation
"FIX002", # line contains to do
"ISC001", # Conflict with formatter
"S101", # can use assert
"TD002", # missing to do author
"TD003", # missing to do link
"TD004", # missing colon in to do
]
lint.per-file-ignores."tests/**/*.py" = [
"FBT", # don't care about booleans as positional arguments in tests
"INP001", # no implicit namespace
"PLC2701", # private import is fine
"PLR0913", # as many arguments as want
"PLR0915", # can have longer test methods
"PLR0917", # as many arguments as want
"PLR2004", # Magic value used in comparison, consider replacing with a constant variable
"S", # no safety concerns
"SLF001", # can test private methods
]
lint.isort = { known-first-party = [
"datamodel_code_generator",
"tests",
], required-imports = [
"from __future__ import annotations",
] }

lint.preview = true

[tool.codespell]
skip = '.git,*.lock,tests'
@@ -166,27 +181,31 @@ filterwarnings = [
"ignore:^.*`experimental string processing` has been included in `preview` and deprecated. Use `preview` instead..*",
]
norecursedirs = "tests/data/*"
verbosity_assertions = 2

[tool.coverage]
run.source = [ "datamodel_code_generator" ]
run.branch = true
run.omit = [ "scripts/*", "tests/*" ]
report.ignore_errors = true
report.exclude_lines = [
"if self.debug:",
"pragma: no cover",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"if TYPE_CHECKING:",
"if not TYPE_CHECKING:",
]
html.skip_covered = false
html.show_contexts = false
paths.source = [
"datamodel_code_generator",
".tox*/*/lib/python*/site-packages/datamodel_code_generator",
".tox*\\*\\Lib\\site-packages\\datamodel_code_generator",
"*/datamodel_code_generator",
"*\\datamodel_code_generator",
]
"src",
".tox*/*/lib/python*/site-packages",
".tox*\\*\\Lib\\site-packages",
"*/src",
"*\\src",
]
paths.other = [
".",
"*/datamodel-code-generator",
"*\\datamodel-code-generator",
]
run.dynamic_context = "none"
run.omit = [ "tests/data/*" ]
report.fail_under = 88
run.parallel = true
run.plugins = [
"covdefaults",
]
covdefaults.subtract_omit = "*/__main__.py"

[tool.pyright]
reportPrivateImportUsage = false
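
Note on #2307: the pyproject.toml hunk above republishes debug/graphql/http/validation as package extras (plus an aggregate `all` extra) instead of keeping them only as dependency groups, so installers can request them and they show up in the installed package metadata. A minimal sketch, assuming datamodel-code-generator 0.27.3 is installed:

    from importlib.metadata import metadata

    meta = metadata("datamodel_code_generator")
    # Extras are published metadata (dependency groups are not), so after
    # this change the list includes: all, debug, graphql, http, validation.
    print(meta.get_all("Provides-Extra"))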
53 changes: 26 additions & 27 deletions scripts/update_command_help_on_markdown.py
@@ -1,39 +1,41 @@
from __future__ import annotations # noqa: INP001

import io
import os
import re
import sys
from pathlib import Path

from datamodel_code_generator.__main__ import Exit, arg_parser
from datamodel_code_generator.__main__ import Exit, arg_parser # noqa: PLC2701

os.environ['COLUMNS'] = '94'
os.environ['LINES'] = '24'
os.environ["COLUMNS"] = "94"
os.environ["LINES"] = "24"

START_MARK: str = '<!-- start command help -->'
END_MARK: str = '<!-- end command help -->'
BASH_CODE_BLOCK: str = '```bash'
CODE_BLOCK_END: str = '```'
START_MARK: str = "<!-- start command help -->"
END_MARK: str = "<!-- end command help -->"
BASH_CODE_BLOCK: str = "```bash"
CODE_BLOCK_END: str = "```"

CURRENT_DIR = Path(__file__).parent
PROJECT_DIR = CURRENT_DIR.parent
DOC_DIR = PROJECT_DIR / 'docs'
DOC_DIR = PROJECT_DIR / "docs"

TARGET_MARKDOWN_FILES: list[Path] = [
DOC_DIR / 'index.md',
PROJECT_DIR / 'README.md',
DOC_DIR / "index.md",
PROJECT_DIR / "README.md",
]

REPLACE_MAP = {'(default: UTF-8)': '(default: utf-8)', "'": r"''"}
REPLACE_MAP = {"(default: UTF-8)": "(default: utf-8)", "'": r"''"}


def get_help():
def get_help() -> str:
with io.StringIO() as f:
arg_parser.print_help(file=f)
output = f.getvalue()
for k, v in REPLACE_MAP.items():
output = output.replace(k, v)
# Remove any terminal codes
return re.sub(r'\x1b\[[0-?]*[ -/]*[@-~]', '', output)
return re.sub(r"\x1b\[[0-?]*[ -/]*[@-~]", "", output)


def inject_help(markdown_text: str, help_text: str) -> str:
@@ -43,43 +45,40 @@ def inject_help(markdown_text: str, help_text: str) -> str:
start_pos = markdown_text.find(START_MARK)
end_pos = markdown_text.find(END_MARK)
if start_pos == -1 or end_pos == -1:
raise ValueError(f'Could not find {START_MARK} or {END_MARK} in markdown_text')
msg = f"Could not find {START_MARK} or {END_MARK} in markdown_text"
raise ValueError(msg)
return (
markdown_text[: start_pos + len(START_MARK)]
+ '\n'
+ "\n"
+ BASH_CODE_BLOCK
+ '\n'
+ "\n"
+ help_text
+ CODE_BLOCK_END
+ '\n'
+ "\n"
+ markdown_text[end_pos:]
)


def main() -> Exit:
help_text = get_help()
arg_parser.add_argument(
'--validate',
action='store_true',
help='Validate the file content is up to date',
"--validate",
action="store_true",
help="Validate the file content is up to date",
)
args = arg_parser.parse_args()
validate: bool = args.validate

for file_path in TARGET_MARKDOWN_FILES:
with file_path.open('r') as f:
with file_path.open("r") as f:
markdown_text = f.read()
new_markdown_text = inject_help(markdown_text, help_text)
if validate and new_markdown_text != markdown_text:
print(
f'{file_path} is not up to date. Run `python update_command_help_on_markdown.py`',
file=sys.stderr,
)
return Exit.ERROR
with file_path.open('w') as f:
with file_path.open("w") as f:
f.write(new_markdown_text)
return Exit.OK


if __name__ == '__main__':
if __name__ == "__main__":
sys.exit(main())
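
For reference, the marker handling in inject_help above replaces everything between the start/end comments with a fresh bash code block. A standalone sketch of the behavior (the markdown snippet is made up):

    md = "intro\n<!-- start command help -->\nstale\n<!-- end command help -->\noutro"
    print(inject_help(md, "usage: datamodel-codegen [options]\n"))
    # intro
    # <!-- start command help -->
    # ```bash
    # usage: datamodel-codegen [options]
    # ```
    # <!-- end command help -->
    # outro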

349 changes: 159 additions & 190 deletions datamodel_code_generator/__init__.py → src/datamodel_code_generator/__init__.py

Large diffs are not rendered by default.

324 changes: 151 additions & 173 deletions datamodel_code_generator/__main__.py → src/datamodel_code_generator/__main__.py

Large diffs are not rendered by default.

523 changes: 523 additions & 0 deletions src/datamodel_code_generator/arguments.py

Large diffs are not rendered by default.

176 changes: 77 additions & 99 deletions datamodel_code_generator/format.py → src/datamodel_code_generator/format.py
@@ -3,7 +3,7 @@
from enum import Enum
from importlib import import_module
from pathlib import Path
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence
from typing import TYPE_CHECKING, Any, Sequence
from warnings import warn

import black
@@ -18,28 +18,28 @@


class DatetimeClassType(Enum):
Datetime = 'datetime'
Awaredatetime = 'AwareDatetime'
Naivedatetime = 'NaiveDatetime'
Datetime = "datetime"
Awaredatetime = "AwareDatetime"
Naivedatetime = "NaiveDatetime"


class PythonVersion(Enum):
PY_36 = '3.6'
PY_37 = '3.7'
PY_38 = '3.8'
PY_39 = '3.9'
PY_310 = '3.10'
PY_311 = '3.11'
PY_312 = '3.12'
PY_313 = '3.13'
PY_36 = "3.6"
PY_37 = "3.7"
PY_38 = "3.8"
PY_39 = "3.9"
PY_310 = "3.10"
PY_311 = "3.11"
PY_312 = "3.12"
PY_313 = "3.13"

@cached_property
def _is_py_38_or_later(self) -> bool: # pragma: no cover
return self.value not in {self.PY_36.value, self.PY_37.value} # type: ignore
return self.value not in {self.PY_36.value, self.PY_37.value}

@cached_property
def _is_py_39_or_later(self) -> bool: # pragma: no cover
return self.value not in {self.PY_36.value, self.PY_37.value, self.PY_38.value} # type: ignore
return self.value not in {self.PY_36.value, self.PY_37.value, self.PY_38.value}

@cached_property
def _is_py_310_or_later(self) -> bool: # pragma: no cover
@@ -48,7 +48,7 @@ def _is_py_310_or_later(self) -> bool: # pragma: no cover
self.PY_37.value,
self.PY_38.value,
self.PY_39.value,
} # type: ignore
}

@cached_property
def _is_py_311_or_later(self) -> bool: # pragma: no cover
@@ -58,7 +58,7 @@ def _is_py_311_or_later(self) -> bool: # pragma: no cover
self.PY_38.value,
self.PY_39.value,
self.PY_310.value,
} # type: ignore
}

@property
def has_literal_type(self) -> bool:
@@ -89,12 +89,12 @@ def has_kw_only_dataclass(self) -> bool:

class _TargetVersion(Enum): ...

BLACK_PYTHON_VERSION: Dict[PythonVersion, _TargetVersion]
BLACK_PYTHON_VERSION: dict[PythonVersion, _TargetVersion]
else:
BLACK_PYTHON_VERSION: Dict[PythonVersion, black.TargetVersion] = {
v: getattr(black.TargetVersion, f'PY{v.name.split("_")[-1]}')
BLACK_PYTHON_VERSION: dict[PythonVersion, black.TargetVersion] = {
v: getattr(black.TargetVersion, f"PY{v.name.split('_')[-1]}")
for v in PythonVersion
if hasattr(black.TargetVersion, f'PY{v.name.split("_")[-1]}')
if hasattr(black.TargetVersion, f"PY{v.name.split('_')[-1]}")
}


@@ -104,132 +104,111 @@ def is_supported_in_black(python_version: PythonVersion) -> bool: # pragma: no

def black_find_project_root(sources: Sequence[Path]) -> Path:
if TYPE_CHECKING:
from typing import Iterable, Tuple, Union
from typing import Iterable # noqa: PLC0415

def _find_project_root(
srcs: Union[Sequence[str], Iterable[str]],
) -> Union[Tuple[Path, str], Path]: ...
srcs: Sequence[str] | Iterable[str],
) -> tuple[Path, str] | Path: ...

else:
from black import find_project_root as _find_project_root
from black import find_project_root as _find_project_root # noqa: PLC0415
project_root = _find_project_root(tuple(str(s) for s in sources))
if isinstance(project_root, tuple):
return project_root[0]
else: # pragma: no cover
return project_root
# pragma: no cover
return project_root


class CodeFormatter:
def __init__(
def __init__( # noqa: PLR0912, PLR0913, PLR0917
self,
python_version: PythonVersion,
settings_path: Optional[Path] = None,
wrap_string_literal: Optional[bool] = None,
skip_string_normalization: bool = True,
known_third_party: Optional[List[str]] = None,
custom_formatters: Optional[List[str]] = None,
custom_formatters_kwargs: Optional[Dict[str, Any]] = None,
settings_path: Path | None = None,
wrap_string_literal: bool | None = None, # noqa: FBT001
skip_string_normalization: bool = True, # noqa: FBT001, FBT002
known_third_party: list[str] | None = None,
custom_formatters: list[str] | None = None,
custom_formatters_kwargs: dict[str, Any] | None = None,
) -> None:
if not settings_path:
settings_path = Path().resolve()
settings_path = Path.cwd()

root = black_find_project_root((settings_path,))
path = root / 'pyproject.toml'
path = root / "pyproject.toml"
if path.is_file():
pyproject_toml = load_toml(path)
config = pyproject_toml.get('tool', {}).get('black', {})
config = pyproject_toml.get("tool", {}).get("black", {})
else:
config = {}

black_kwargs: Dict[str, Any] = {}
black_kwargs: dict[str, Any] = {}
if wrap_string_literal is not None:
experimental_string_processing = wrap_string_literal
elif black.__version__ < "24.1.0":
experimental_string_processing = config.get("experimental-string-processing")
else:
if black.__version__ < '24.1.0': # type: ignore
experimental_string_processing = config.get(
'experimental-string-processing'
)
else:
experimental_string_processing = config.get(
'preview', False
) and ( # pragma: no cover
config.get('unstable', False)
or 'string_processing' in config.get('enable-unstable-feature', [])
)
experimental_string_processing = config.get("preview", False) and ( # pragma: no cover
config.get("unstable", False) or "string_processing" in config.get("enable-unstable-feature", [])
)

if experimental_string_processing is not None: # pragma: no cover
if black.__version__.startswith('19.'): # type: ignore
if black.__version__.startswith("19."):
warn(
f"black doesn't support `experimental-string-processing` option" # type: ignore
f' for wrapping string literal in {black.__version__}'
)
elif black.__version__ < '24.1.0': # type: ignore
black_kwargs['experimental_string_processing'] = (
experimental_string_processing
f"black doesn't support `experimental-string-processing` option"
f" for wrapping string literal in {black.__version__}",
stacklevel=2,
)
elif black.__version__ < "24.1.0":
black_kwargs["experimental_string_processing"] = experimental_string_processing
elif experimental_string_processing:
black_kwargs['preview'] = True
black_kwargs['unstable'] = config.get('unstable', False)
black_kwargs['enabled_features'] = {
black.mode.Preview.string_processing
}
black_kwargs["preview"] = True
black_kwargs["unstable"] = config.get("unstable", False)
black_kwargs["enabled_features"] = {black.mode.Preview.string_processing}

if TYPE_CHECKING:
self.black_mode: black.FileMode
else:
self.black_mode = black.FileMode(
target_versions={BLACK_PYTHON_VERSION[python_version]},
line_length=config.get('line-length', black.DEFAULT_LINE_LENGTH),
string_normalization=not skip_string_normalization
or not config.get('skip-string-normalization', True),
line_length=config.get("line-length", black.DEFAULT_LINE_LENGTH),
string_normalization=not skip_string_normalization or not config.get("skip-string-normalization", True),
**black_kwargs,
)

self.settings_path: str = str(settings_path)

self.isort_config_kwargs: Dict[str, Any] = {}
self.isort_config_kwargs: dict[str, Any] = {}
if known_third_party:
self.isort_config_kwargs['known_third_party'] = known_third_party
self.isort_config_kwargs["known_third_party"] = known_third_party

if isort.__version__.startswith('4.'):
if isort.__version__.startswith("4."):
self.isort_config = None
else:
self.isort_config = isort.Config(
settings_path=self.settings_path, **self.isort_config_kwargs
)
self.isort_config = isort.Config(settings_path=self.settings_path, **self.isort_config_kwargs)

self.custom_formatters_kwargs = custom_formatters_kwargs or {}
self.custom_formatters = self._check_custom_formatters(custom_formatters)

def _load_custom_formatter(
self, custom_formatter_import: str
) -> CustomCodeFormatter:
def _load_custom_formatter(self, custom_formatter_import: str) -> CustomCodeFormatter:
import_ = import_module(custom_formatter_import)

if not hasattr(import_, 'CodeFormatter'):
raise NameError(
f'Custom formatter module `{import_.__name__}` must contains object with name Formatter'
)
if not hasattr(import_, "CodeFormatter"):
msg = f"Custom formatter module `{import_.__name__}` must contains object with name Formatter"
raise NameError(msg)

formatter_class = import_.__getattribute__('CodeFormatter')
formatter_class = import_.__getattribute__("CodeFormatter") # noqa: PLC2801

if not issubclass(formatter_class, CustomCodeFormatter):
raise TypeError(
f'The custom module {custom_formatter_import} must inherit from `datamodel-code-generator`'
)
msg = f"The custom module {custom_formatter_import} must inherit from `datamodel-code-generator`"
raise TypeError(msg)

return formatter_class(formatter_kwargs=self.custom_formatters_kwargs)

def _check_custom_formatters(
self, custom_formatters: Optional[List[str]]
) -> List[CustomCodeFormatter]:
def _check_custom_formatters(self, custom_formatters: list[str] | None) -> list[CustomCodeFormatter]:
if custom_formatters is None:
return []

return [
self._load_custom_formatter(custom_formatter_import)
for custom_formatter_import in custom_formatters
]
return [self._load_custom_formatter(custom_formatter_import) for custom_formatter_import in custom_formatters]

def format_code(
self,
@@ -253,24 +232,23 @@ def apply_black(self, code: str) -> str:

def apply_isort(self, code: str) -> str: ...

else:
if isort.__version__.startswith('4.'):
elif isort.__version__.startswith("4."):

def apply_isort(self, code: str) -> str:
return isort.SortImports(
file_contents=code,
settings_path=self.settings_path,
**self.isort_config_kwargs,
).output
def apply_isort(self, code: str) -> str:
return isort.SortImports(
file_contents=code,
settings_path=self.settings_path,
**self.isort_config_kwargs,
).output

else:
else:

def apply_isort(self, code: str) -> str:
return isort.code(code, config=self.isort_config)
def apply_isort(self, code: str) -> str:
return isort.code(code, config=self.isort_config)


class CustomCodeFormatter:
def __init__(self, formatter_kwargs: Dict[str, Any]) -> None:
def __init__(self, formatter_kwargs: dict[str, Any]) -> None:
self.formatter_kwargs = formatter_kwargs

def apply(self, code: str) -> str:
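
The checks in _load_custom_formatter above spell out the plugin contract for custom formatters: the imported module must expose a CodeFormatter class that subclasses CustomCodeFormatter and is constructed with formatter_kwargs. A minimal sketch of such a module (the module name and the "header" kwarg are hypothetical):

    # my_formatters/license_header.py (hypothetical module)
    from datamodel_code_generator.format import CustomCodeFormatter

    class CodeFormatter(CustomCodeFormatter):
        def apply(self, code: str) -> str:
            # self.formatter_kwargs holds the user-supplied kwargs dict
            header = self.formatter_kwargs.get("header", "# generated code")
            return f"{header}\n{code}"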
29 changes: 29 additions & 0 deletions src/datamodel_code_generator/http.py
@@ -0,0 +1,29 @@
from __future__ import annotations

from typing import Sequence

try:
import httpx
except ImportError as exc: # pragma: no cover
msg = "Please run `$pip install 'datamodel-code-generator[http]`' to resolve URL Reference"
raise Exception(msg) from exc # noqa: TRY002


def get_body(
url: str,
headers: Sequence[tuple[str, str]] | None = None,
ignore_tls: bool = False, # noqa: FBT001, FBT002
query_parameters: Sequence[tuple[str, str]] | None = None,
) -> str:
return httpx.get(
url,
headers=headers,
verify=not ignore_tls,
follow_redirects=True,
params=query_parameters, # pyright: ignore[reportArgumentType]
# TODO: Improve params type
).text


def join_url(url: str, ref: str = ".") -> str:
return str(httpx.URL(url).join(ref))
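
join_url above delegates to httpx.URL.join, which resolves the ref against the base URL per RFC 3986; the default ref "." resolves to the base document's directory. A small sketch of the resulting behavior (URLs are made up):

    from datamodel_code_generator.http import join_url

    base = "https://example.com/schemas/root.json"
    print(join_url(base, "common.json"))  # https://example.com/schemas/common.json
    print(join_url(base))                 # ref defaults to "." -> https://example.com/schemas/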
120 changes: 120 additions & 0 deletions src/datamodel_code_generator/imports.py
@@ -0,0 +1,120 @@
from __future__ import annotations

from collections import defaultdict
from functools import lru_cache
from itertools import starmap
from typing import DefaultDict, Iterable, Optional, Set

from datamodel_code_generator.util import BaseModel


class Import(BaseModel):
from_: Optional[str] = None # noqa: UP045
import_: str
alias: Optional[str] = None # noqa: UP045
reference_path: Optional[str] = None # noqa: UP045

@classmethod
@lru_cache
def from_full_path(cls, class_path: str) -> Import:
split_class_path: list[str] = class_path.split(".")
return Import(from_=".".join(split_class_path[:-1]) or None, import_=split_class_path[-1])


class Imports(DefaultDict[Optional[str], Set[str]]):
def __str__(self) -> str:
return self.dump()

def __init__(self, use_exact: bool = False) -> None: # noqa: FBT001, FBT002
super().__init__(set)
self.alias: defaultdict[str | None, dict[str, str]] = defaultdict(dict)
self.counter: dict[tuple[str | None, str], int] = defaultdict(int)
self.reference_paths: dict[str, Import] = {}
self.use_exact: bool = use_exact

def _set_alias(self, from_: str | None, imports: set[str]) -> list[str]:
return [
f"{i} as {self.alias[from_][i]}" if i in self.alias[from_] and i != self.alias[from_][i] else i
for i in sorted(imports)
]

def create_line(self, from_: str | None, imports: set[str]) -> str:
if from_:
return f"from {from_} import {', '.join(self._set_alias(from_, imports))}"
return "\n".join(f"import {i}" for i in self._set_alias(from_, imports))

def dump(self) -> str:
return "\n".join(starmap(self.create_line, self.items()))

def append(self, imports: Import | Iterable[Import] | None) -> None:
if imports:
if isinstance(imports, Import):
imports = [imports]
for import_ in imports:
if import_.reference_path:
self.reference_paths[import_.reference_path] = import_
if "." in import_.import_:
self[None].add(import_.import_)
self.counter[None, import_.import_] += 1
else:
self[import_.from_].add(import_.import_)
self.counter[import_.from_, import_.import_] += 1
if import_.alias:
self.alias[import_.from_][import_.import_] = import_.alias

def remove(self, imports: Import | Iterable[Import]) -> None:
if isinstance(imports, Import): # pragma: no cover
imports = [imports]
for import_ in imports:
if "." in import_.import_: # pragma: no cover
self.counter[None, import_.import_] -= 1
if self.counter[None, import_.import_] == 0: # pragma: no cover
self[None].remove(import_.import_)
if not self[None]:
del self[None]
else:
self.counter[import_.from_, import_.import_] -= 1 # pragma: no cover
if self.counter[import_.from_, import_.import_] == 0: # pragma: no cover
self[import_.from_].remove(import_.import_)
if not self[import_.from_]:
del self[import_.from_]
if import_.alias: # pragma: no cover
del self.alias[import_.from_][import_.import_]
if not self.alias[import_.from_]:
del self.alias[import_.from_]

def remove_referenced_imports(self, reference_path: str) -> None:
if reference_path in self.reference_paths:
self.remove(self.reference_paths[reference_path])


IMPORT_ANNOTATED = Import.from_full_path("typing.Annotated")
IMPORT_ANNOTATED_BACKPORT = Import.from_full_path("typing_extensions.Annotated")
IMPORT_ANY = Import.from_full_path("typing.Any")
IMPORT_LIST = Import.from_full_path("typing.List")
IMPORT_SET = Import.from_full_path("typing.Set")
IMPORT_UNION = Import.from_full_path("typing.Union")
IMPORT_OPTIONAL = Import.from_full_path("typing.Optional")
IMPORT_LITERAL = Import.from_full_path("typing.Literal")
IMPORT_TYPE_ALIAS = Import.from_full_path("typing.TypeAlias")
IMPORT_LITERAL_BACKPORT = Import.from_full_path("typing_extensions.Literal")
IMPORT_SEQUENCE = Import.from_full_path("typing.Sequence")
IMPORT_FROZEN_SET = Import.from_full_path("typing.FrozenSet")
IMPORT_MAPPING = Import.from_full_path("typing.Mapping")
IMPORT_ABC_SEQUENCE = Import.from_full_path("collections.abc.Sequence")
IMPORT_ABC_SET = Import.from_full_path("collections.abc.Set")
IMPORT_ABC_MAPPING = Import.from_full_path("collections.abc.Mapping")
IMPORT_ENUM = Import.from_full_path("enum.Enum")
IMPORT_ANNOTATIONS = Import.from_full_path("__future__.annotations")
IMPORT_DICT = Import.from_full_path("typing.Dict")
IMPORT_DECIMAL = Import.from_full_path("decimal.Decimal")
IMPORT_DATE = Import.from_full_path("datetime.date")
IMPORT_DATETIME = Import.from_full_path("datetime.datetime")
IMPORT_TIMEDELTA = Import.from_full_path("datetime.timedelta")
IMPORT_PATH = Import.from_full_path("pathlib.Path")
IMPORT_TIME = Import.from_full_path("datetime.time")
IMPORT_UUID = Import.from_full_path("uuid.UUID")
IMPORT_PENDULUM_DATE = Import.from_full_path("pendulum.Date")
IMPORT_PENDULUM_DATETIME = Import.from_full_path("pendulum.DateTime")
IMPORT_PENDULUM_DURATION = Import.from_full_path("pendulum.Duration")
IMPORT_PENDULUM_TIME = Import.from_full_path("pendulum.Time")
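
A short sketch of how the rewritten Imports container is used: appended entries are grouped by their from_ module, aliases are applied, and dump() renders one line per module:

    from datamodel_code_generator.imports import Import, Imports

    imports = Imports()
    imports.append([
        Import.from_full_path("typing.Optional"),
        Import.from_full_path("typing.List"),
        Import(from_="pydantic", import_="BaseModel", alias="Base"),
    ])
    print(imports.dump())
    # from typing import List, Optional
    # from pydantic import BaseModel as Base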
57 changes: 26 additions & 31 deletions datamodel_code_generator/model/__init__.py → src/datamodel_code_generator/model/__init__.py
@@ -1,38 +1,38 @@
from __future__ import annotations

import sys
from typing import TYPE_CHECKING, Callable, Iterable, List, NamedTuple, Optional, Type
from typing import TYPE_CHECKING, Callable, Iterable, NamedTuple

from datamodel_code_generator import DatetimeClassType, PythonVersion

from .. import DatetimeClassType, PythonVersion
from ..types import DataTypeManager as DataTypeManagerABC
from .base import ConstraintsBase, DataModel, DataModelFieldBase

if TYPE_CHECKING:
from .. import DataModelType
from datamodel_code_generator import DataModelType
from datamodel_code_generator.types import DataTypeManager as DataTypeManagerABC

DEFAULT_TARGET_DATETIME_CLASS = DatetimeClassType.Datetime
DEFAULT_TARGET_PYTHON_VERSION = PythonVersion(
f'{sys.version_info.major}.{sys.version_info.minor}'
)
DEFAULT_TARGET_PYTHON_VERSION = PythonVersion(f"{sys.version_info.major}.{sys.version_info.minor}")


class DataModelSet(NamedTuple):
data_model: Type[DataModel]
root_model: Type[DataModel]
field_model: Type[DataModelFieldBase]
data_type_manager: Type[DataTypeManagerABC]
dump_resolve_reference_action: Optional[Callable[[Iterable[str]], str]]
known_third_party: Optional[List[str]] = None
data_model: type[DataModel]
root_model: type[DataModel]
field_model: type[DataModelFieldBase]
data_type_manager: type[DataTypeManagerABC]
dump_resolve_reference_action: Callable[[Iterable[str]], str] | None
known_third_party: list[str] | None = None


def get_data_model_types(
data_model_type: DataModelType,
target_python_version: PythonVersion = DEFAULT_TARGET_PYTHON_VERSION,
target_datetime_class: Optional[DatetimeClassType] = None,
target_datetime_class: DatetimeClassType | None = None,
) -> DataModelSet:
from .. import DataModelType
from . import dataclass, msgspec, pydantic, pydantic_v2, rootmodel, typed_dict
from .types import DataTypeManager
from datamodel_code_generator import DataModelType # noqa: PLC0415

from . import dataclass, msgspec, pydantic, pydantic_v2, rootmodel, typed_dict # noqa: PLC0415
from .types import DataTypeManager # noqa: PLC0415

if target_datetime_class is None:
target_datetime_class = DEFAULT_TARGET_DATETIME_CLASS
@@ -44,29 +44,25 @@ def get_data_model_types(
data_type_manager=pydantic.DataTypeManager,
dump_resolve_reference_action=pydantic.dump_resolve_reference_action,
)
elif data_model_type == DataModelType.PydanticV2BaseModel:
if data_model_type == DataModelType.PydanticV2BaseModel:
return DataModelSet(
data_model=pydantic_v2.BaseModel,
root_model=pydantic_v2.RootModel,
field_model=pydantic_v2.DataModelField,
data_type_manager=pydantic_v2.DataTypeManager,
dump_resolve_reference_action=pydantic_v2.dump_resolve_reference_action,
)
elif data_model_type == DataModelType.DataclassesDataclass:
if data_model_type == DataModelType.DataclassesDataclass:
return DataModelSet(
data_model=dataclass.DataClass,
root_model=rootmodel.RootModel,
field_model=dataclass.DataModelField,
data_type_manager=dataclass.DataTypeManager,
dump_resolve_reference_action=None,
)
elif data_model_type == DataModelType.TypingTypedDict:
if data_model_type == DataModelType.TypingTypedDict:
return DataModelSet(
data_model=(
typed_dict.TypedDict
if target_python_version.has_typed_dict
else typed_dict.TypedDictBackport
),
data_model=(typed_dict.TypedDict if target_python_version.has_typed_dict else typed_dict.TypedDictBackport),
root_model=rootmodel.RootModel,
field_model=(
typed_dict.DataModelField
@@ -76,18 +72,17 @@ def get_data_model_types(
data_type_manager=DataTypeManager,
dump_resolve_reference_action=None,
)
elif data_model_type == DataModelType.MsgspecStruct:
if data_model_type == DataModelType.MsgspecStruct:
return DataModelSet(
data_model=msgspec.Struct,
root_model=msgspec.RootModel,
field_model=msgspec.DataModelField,
data_type_manager=msgspec.DataTypeManager,
dump_resolve_reference_action=None,
known_third_party=['msgspec'],
known_third_party=["msgspec"],
)
raise ValueError(
f'{data_model_type} is unsupported data model type'
) # pragma: no cover
msg = f"{data_model_type} is unsupported data model type"
raise ValueError(msg) # pragma: no cover


__all__ = ['ConstraintsBase', 'DataModel', 'DataModelFieldBase']
__all__ = ["ConstraintsBase", "DataModel", "DataModelFieldBase"]
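
get_data_model_types above is the dispatch point that picks the model/field/type-manager classes for each output flavour; the elif chain becomes flat ifs because every branch returns. A minimal usage sketch:

    from datamodel_code_generator import DataModelType, PythonVersion
    from datamodel_code_generator.model import get_data_model_types

    model_set = get_data_model_types(
        DataModelType.PydanticV2BaseModel,
        target_python_version=PythonVersion.PY_312,
    )
    # DataModelSet fields as defined above
    print(model_set.data_model, model_set.data_type_manager)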

221 changes: 94 additions & 127 deletions datamodel_code_generator/model/base.py → src/datamodel_code_generator/model/base.py

Large diffs are not rendered by default.

117 changes: 58 additions & 59 deletions datamodel_code_generator/model/dataclass.py → src/datamodel_code_generator/model/dataclass.py
@@ -1,10 +1,9 @@
from pathlib import Path
from __future__ import annotations

from typing import (
TYPE_CHECKING,
Any,
ClassVar,
DefaultDict,
Dict,
List,
Optional,
Sequence,
Set,
@@ -22,37 +21,42 @@
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED
from datamodel_code_generator.model.imports import IMPORT_DATACLASS, IMPORT_FIELD
from datamodel_code_generator.model.pydantic.base_model import Constraints
from datamodel_code_generator.model.types import DataTypeManager as _DataTypeManager
from datamodel_code_generator.model.types import type_map_factory
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import DataType, StrictTypes, Types, chain_as_tuple

if TYPE_CHECKING:
from collections import defaultdict
from pathlib import Path

from datamodel_code_generator.reference import Reference

from datamodel_code_generator.model.pydantic.base_model import Constraints # noqa: TC001


def _has_field_assignment(field: DataModelFieldBase) -> bool:
return bool(field.field) or not (
field.required
or (field.represented_default == 'None' and field.strip_default_none)
field.required or (field.represented_default == "None" and field.strip_default_none)
)


class DataClass(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'dataclass.jinja2'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_DATACLASS,)
TEMPLATE_FILE_PATH: ClassVar[str] = "dataclass.jinja2"
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_DATACLASS,) # noqa: UP006

def __init__(
def __init__( # noqa: PLR0913
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
fields: list[DataModelFieldBase],
decorators: list[str] | None = None,
base_classes: list[Reference] | None = None,
custom_base_class: str | None = None,
custom_template_dir: Path | None = None,
extra_template_data: defaultdict[str, dict[str, Any]] | None = None,
methods: list[str] | None = None,
path: Path | None = None,
description: str | None = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
@@ -75,21 +79,21 @@ def __init__(


class DataModelField(DataModelFieldBase):
_FIELD_KEYS: ClassVar[Set[str]] = {
'default_factory',
'init',
'repr',
'hash',
'compare',
'metadata',
'kw_only',
_FIELD_KEYS: ClassVar[Set[str]] = { # noqa: UP006
"default_factory",
"init",
"repr",
"hash",
"compare",
"metadata",
"kw_only",
}
constraints: Optional[Constraints] = None
constraints: Optional[Constraints] = None # noqa: UP045

@property
def imports(self) -> Tuple[Import, ...]:
def imports(self) -> tuple[Import, ...]:
field = self.field
if field and field.startswith('field('):
if field and field.startswith("field("):
return chain_as_tuple(super().imports, (IMPORT_FIELD,))
return super().imports

@@ -99,60 +103,55 @@ def self_reference(self) -> bool: # pragma: no cover
}

@property
def field(self) -> Optional[str]:
def field(self) -> str | None:
"""for backwards compatibility"""
result = str(self)
if result == '':
if not result:
return None

return result

def __str__(self) -> str:
data: Dict[str, Any] = {
k: v for k, v in self.extras.items() if k in self._FIELD_KEYS
}
data: dict[str, Any] = {k: v for k, v in self.extras.items() if k in self._FIELD_KEYS}

if self.default != UNDEFINED and self.default is not None:
data['default'] = self.default
data["default"] = self.default

if self.required:
data = {
k: v
for k, v in data.items()
if k
not in (
'default',
'default_factory',
)
not in {
"default",
"default_factory",
}
}

if not data:
return ''
return ""

if len(data) == 1 and 'default' in data:
default = data['default']
if len(data) == 1 and "default" in data:
default = data["default"]

if isinstance(default, (list, dict)):
return f'field(default_factory=lambda :{repr(default)})'
return f"field(default_factory=lambda :{default!r})"
return repr(default)
kwargs = [
f'{k}={v if k == "default_factory" else repr(v)}' for k, v in data.items()
]
return f'field({", ".join(kwargs)})'
kwargs = [f"{k}={v if k == 'default_factory' else repr(v)}" for k, v in data.items()]
return f"field({', '.join(kwargs)})"


class DataTypeManager(_DataTypeManager):
def __init__(
def __init__( # noqa: PLR0913, PLR0917
self,
python_version: PythonVersion = PythonVersion.PY_38,
use_standard_collections: bool = False,
use_generic_container_types: bool = False,
strict_types: Optional[Sequence[StrictTypes]] = None,
use_non_positive_negative_number_constrained_types: bool = False,
use_union_operator: bool = False,
use_pendulum: bool = False,
use_standard_collections: bool = False, # noqa: FBT001, FBT002
use_generic_container_types: bool = False, # noqa: FBT001, FBT002
strict_types: Sequence[StrictTypes] | None = None,
use_non_positive_negative_number_constrained_types: bool = False, # noqa: FBT001, FBT002
use_union_operator: bool = False, # noqa: FBT001, FBT002
use_pendulum: bool = False, # noqa: FBT001, FBT002
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
):
) -> None:
super().__init__(
python_version,
use_standard_collections,
@@ -175,7 +174,7 @@ def __init__(
else {}
)

self.type_map: Dict[Types, DataType] = {
self.type_map: dict[Types, DataType] = {
**type_map_factory(self.data_type),
**datetime_map,
}
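
In DataModelField.__str__ above, mutable defaults are emitted through a default_factory so generated dataclasses do not share state. A standalone sketch of just that rule (render_default is a made-up helper):

    def render_default(default):
        # mirrors the list/dict branch of DataModelField.__str__ above
        if isinstance(default, (list, dict)):
            return f"field(default_factory=lambda :{default!r})"
        return repr(default)

    print(render_default([1, 2]))  # field(default_factory=lambda :[1, 2])
    print(render_default("x"))     # 'x'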
64 changes: 34 additions & 30 deletions datamodel_code_generator/model/enum.py → src/datamodel_code_generator/model/enum.py
@@ -1,20 +1,24 @@
from __future__ import annotations

from pathlib import Path
from typing import Any, ClassVar, DefaultDict, Dict, List, Optional, Tuple
from typing import TYPE_CHECKING, Any, ClassVar, Optional, Tuple

from datamodel_code_generator.imports import IMPORT_ANY, IMPORT_ENUM, Import
from datamodel_code_generator.model import DataModel, DataModelFieldBase
from datamodel_code_generator.model.base import UNDEFINED, BaseClassDataType
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import DataType, Types

_INT: str = 'int'
_FLOAT: str = 'float'
_BYTES: str = 'bytes'
_STR: str = 'str'
if TYPE_CHECKING:
from collections import defaultdict
from pathlib import Path

SUBCLASS_BASE_CLASSES: Dict[Types, str] = {
from datamodel_code_generator.reference import Reference

_INT: str = "int"
_FLOAT: str = "float"
_BYTES: str = "bytes"
_STR: str = "str"

SUBCLASS_BASE_CLASSES: dict[Types, str] = {
Types.int32: _INT,
Types.int64: _INT,
Types.integer: _INT,
@@ -27,28 +31,28 @@


class Enum(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'Enum.jinja2'
BASE_CLASS: ClassVar[str] = 'enum.Enum'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_ENUM,)
TEMPLATE_FILE_PATH: ClassVar[str] = "Enum.jinja2"
BASE_CLASS: ClassVar[str] = "enum.Enum"
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = (IMPORT_ENUM,) # noqa: UP006

def __init__(
def __init__( # noqa: PLR0913
self,
*,
reference: Reference,
fields: List[DataModelFieldBase],
decorators: Optional[List[str]] = None,
base_classes: Optional[List[Reference]] = None,
custom_base_class: Optional[str] = None,
custom_template_dir: Optional[Path] = None,
extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
methods: Optional[List[str]] = None,
path: Optional[Path] = None,
description: Optional[str] = None,
type_: Optional[Types] = None,
fields: list[DataModelFieldBase],
decorators: list[str] | None = None,
base_classes: list[Reference] | None = None,
custom_base_class: str | None = None,
custom_template_dir: Path | None = None,
extra_template_data: defaultdict[str, dict[str, Any]] | None = None,
methods: list[str] | None = None,
path: Path | None = None,
description: str | None = None,
type_: Types | None = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
):
) -> None:
super().__init__(
reference=reference,
fields=fields,
@@ -68,7 +72,7 @@ def __init__(
if not base_classes and type_:
base_class = SUBCLASS_BASE_CLASSES.get(type_)
if base_class:
self.base_classes: List[BaseClassDataType] = [
self.base_classes: list[BaseClassDataType] = [
BaseClassDataType(type=base_class),
*self.base_classes,
]
@@ -80,14 +84,14 @@ def get_data_type(cls, types: Types, **kwargs: Any) -> DataType:
def get_member(self, field: DataModelFieldBase) -> Member:
return Member(self, field)

def find_member(self, value: Any) -> Optional[Member]:
def find_member(self, value: Any) -> Member | None:
repr_value = repr(value)
# Remove surrounding quotes from the string representation
str_value = str(value).strip('\'"')
str_value = str(value).strip("'\"")

for field in self.fields:
# Remove surrounding quotes from field default value
field_default = (field.default or '').strip('\'"')
field_default = (field.default or "").strip("'\"")

# Compare values after removing quotes
if field_default == str_value:
@@ -100,15 +104,15 @@ def find_member(self, value: Any) -> Optional[Member]:
return None

@property
def imports(self) -> Tuple[Import, ...]:
def imports(self) -> tuple[Import, ...]:
return tuple(i for i in super().imports if i != IMPORT_ANY)


class Member:
def __init__(self, enum: Enum, field: DataModelFieldBase) -> None:
self.enum: Enum = enum
self.field: DataModelFieldBase = field
self.alias: Optional[str] = None
self.alias: Optional[str] = None # noqa: UP045

def __repr__(self) -> str:
return f'{self.alias or self.enum.name}.{self.field.name}'
return f"{self.alias or self.enum.name}.{self.field.name}"
15 changes: 15 additions & 0 deletions src/datamodel_code_generator/model/imports.py
@@ -0,0 +1,15 @@
from __future__ import annotations

from datamodel_code_generator.imports import Import

IMPORT_DATACLASS = Import.from_full_path("dataclasses.dataclass")
IMPORT_FIELD = Import.from_full_path("dataclasses.field")
IMPORT_CLASSVAR = Import.from_full_path("typing.ClassVar")
IMPORT_TYPED_DICT = Import.from_full_path("typing.TypedDict")
IMPORT_TYPED_DICT_BACKPORT = Import.from_full_path("typing_extensions.TypedDict")
IMPORT_NOT_REQUIRED = Import.from_full_path("typing.NotRequired")
IMPORT_NOT_REQUIRED_BACKPORT = Import.from_full_path("typing_extensions.NotRequired")
IMPORT_MSGSPEC_STRUCT = Import.from_full_path("msgspec.Struct")
IMPORT_MSGSPEC_FIELD = Import.from_full_path("msgspec.field")
IMPORT_MSGSPEC_META = Import.from_full_path("msgspec.Meta")
IMPORT_MSGSPEC_CONVERT = Import.from_full_path("msgspec.convert")
238 changes: 112 additions & 126 deletions datamodel_code_generator/model/msgspec.py → src/datamodel_code_generator/model/msgspec.py
@@ -1,16 +1,14 @@
from __future__ import annotations

from functools import wraps
from pathlib import Path
from typing import (
TYPE_CHECKING,
Any,
ClassVar,
DefaultDict,
Dict,
List,
Optional,
Sequence,
Set,
Tuple,
Type,
TypeVar,
)

@@ -38,7 +36,6 @@
from datamodel_code_generator.model.rootmodel import RootModel as _RootModel
from datamodel_code_generator.model.types import DataTypeManager as _DataTypeManager
from datamodel_code_generator.model.types import type_map_factory
from datamodel_code_generator.reference import Reference
from datamodel_code_generator.types import (
DataType,
StrictTypes,
@@ -47,36 +44,39 @@
get_optional_type,
)

if TYPE_CHECKING:
from collections import defaultdict
from pathlib import Path

from datamodel_code_generator.reference import Reference


def _has_field_assignment(field: DataModelFieldBase) -> bool:
return not (
field.required
or (field.represented_default == 'None' and field.strip_default_none)
)
return not (field.required or (field.represented_default == "None" and field.strip_default_none))


DataModelFieldBaseT = TypeVar('DataModelFieldBaseT', bound=DataModelFieldBase)
DataModelFieldBaseT = TypeVar("DataModelFieldBaseT", bound=DataModelFieldBase)


def import_extender(cls: Type[DataModelFieldBaseT]) -> Type[DataModelFieldBaseT]:
original_imports: property = getattr(cls, 'imports', None) # type: ignore
def import_extender(cls: type[DataModelFieldBaseT]) -> type[DataModelFieldBaseT]:
original_imports: property = cls.imports

@wraps(original_imports.fget) # type: ignore
def new_imports(self: DataModelFieldBaseT) -> Tuple[Import, ...]:
@wraps(original_imports.fget) # pyright: ignore[reportArgumentType]
def new_imports(self: DataModelFieldBaseT) -> tuple[Import, ...]:
extra_imports = []
field = self.field
# TODO: Improve field detection
if field and field.startswith('field('):
if field and field.startswith("field("):
extra_imports.append(IMPORT_MSGSPEC_FIELD)
if self.field and 'lambda: convert' in self.field:
if self.field and "lambda: convert" in self.field:
extra_imports.append(IMPORT_MSGSPEC_CONVERT)
if self.annotated:
extra_imports.append(IMPORT_MSGSPEC_META)
if self.extras.get('is_classvar'):
if self.extras.get("is_classvar"):
extra_imports.append(IMPORT_CLASSVAR)
return chain_as_tuple(original_imports.fget(self), extra_imports) # type: ignore
return chain_as_tuple(original_imports.fget(self), extra_imports) # pyright: ignore[reportOptionalCall]

setattr(cls, 'imports', property(new_imports))
cls.imports = property(new_imports) # pyright: ignore[reportAttributeAccessIssue]
return cls


@@ -85,23 +85,23 @@ class RootModel(_RootModel):


class Struct(DataModel):
TEMPLATE_FILE_PATH: ClassVar[str] = 'msgspec.jinja2'
BASE_CLASS: ClassVar[str] = 'msgspec.Struct'
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = ()
TEMPLATE_FILE_PATH: ClassVar[str] = "msgspec.jinja2"
BASE_CLASS: ClassVar[str] = "msgspec.Struct"
DEFAULT_IMPORTS: ClassVar[Tuple[Import, ...]] = () # noqa: UP006

def __init__(
def __init__( # noqa: PLR0913
self,
*,
reference: Reference,
-        fields: List[DataModelFieldBase],
-        decorators: Optional[List[str]] = None,
-        base_classes: Optional[List[Reference]] = None,
-        custom_base_class: Optional[str] = None,
-        custom_template_dir: Optional[Path] = None,
-        extra_template_data: Optional[DefaultDict[str, Dict[str, Any]]] = None,
-        methods: Optional[List[str]] = None,
-        path: Optional[Path] = None,
-        description: Optional[str] = None,
+        fields: list[DataModelFieldBase],
+        decorators: list[str] | None = None,
+        base_classes: list[Reference] | None = None,
+        custom_base_class: str | None = None,
+        custom_template_dir: Path | None = None,
+        extra_template_data: defaultdict[str, dict[str, Any]] | None = None,
+        methods: list[str] | None = None,
+        path: Path | None = None,
+        description: str | None = None,
default: Any = UNDEFINED,
nullable: bool = False,
keyword_only: bool = False,
@@ -121,133 +121,118 @@ def __init__(
nullable=nullable,
keyword_only=keyword_only,
)
-        self.extra_template_data.setdefault('base_class_kwargs', {})
+        self.extra_template_data.setdefault("base_class_kwargs", {})
if self.keyword_only:
-            self.add_base_class_kwarg('kw_only', 'True')
+            self.add_base_class_kwarg("kw_only", "True")

-    def add_base_class_kwarg(self, name: str, value):
-        self.extra_template_data['base_class_kwargs'][name] = value
+    def add_base_class_kwarg(self, name: str, value: str) -> None:
+        self.extra_template_data["base_class_kwargs"][name] = value


class Constraints(_Constraints):
# To override existing pattern alias
-    regex: Optional[str] = Field(None, alias='regex')
-    pattern: Optional[str] = Field(None, alias='pattern')
+    regex: Optional[str] = Field(None, alias="regex")  # noqa: UP045
+    pattern: Optional[str] = Field(None, alias="pattern")  # noqa: UP045


@import_extender
class DataModelField(DataModelFieldBase):
-    _FIELD_KEYS: ClassVar[Set[str]] = {
-        'default',
-        'default_factory',
+    _FIELD_KEYS: ClassVar[Set[str]] = {  # noqa: UP006
+        "default",
+        "default_factory",
}
-    _META_FIELD_KEYS: ClassVar[Set[str]] = {
-        'title',
-        'description',
-        'gt',
-        'ge',
-        'lt',
-        'le',
-        'multiple_of',
+    _META_FIELD_KEYS: ClassVar[Set[str]] = {  # noqa: UP006
+        "title",
+        "description",
+        "gt",
+        "ge",
+        "lt",
+        "le",
+        "multiple_of",
# 'min_items', # not supported by msgspec
# 'max_items', # not supported by msgspec
-        'min_length',
-        'max_length',
-        'pattern',
-        'examples',
+        "min_length",
+        "max_length",
+        "pattern",
+        "examples",
# 'unique_items', # not supported by msgspec
}
-    _PARSE_METHOD = 'convert'
-    _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {'gt', 'ge', 'lt', 'le', 'multiple_of'}
-    constraints: Optional[Constraints] = None
+    _PARSE_METHOD = "convert"
+    _COMPARE_EXPRESSIONS: ClassVar[Set[str]] = {"gt", "ge", "lt", "le", "multiple_of"}  # noqa: UP006
+    constraints: Optional[Constraints] = None  # noqa: UP045

def self_reference(self) -> bool: # pragma: no cover
return isinstance(self.parent, Struct) and self.parent.reference.path in {
d.reference.path for d in self.data_type.all_data_types if d.reference
}

def process_const(self) -> None:
-        if 'const' not in self.extras:
-            return None
+        if "const" not in self.extras:
+            return
self.const = True
self.nullable = False
-        const = self.extras['const']
-        if self.data_type.type == 'str' and isinstance(
-            const, str
-        ):  # pragma: no cover # Literal supports only str
+        const = self.extras["const"]
+        if self.data_type.type == "str" and isinstance(const, str):  # pragma: no cover # Literal supports only str
self.data_type = self.data_type.__class__(literals=[const])
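In effect, a string `const` in the schema collapses the field's type to a single-value Literal. A hedged input/output example (schema and field name assumed):

# JSON Schema fragment (assumed input):
#   {"properties": {"currency": {"type": "string", "const": "USD"}}}
# Generated msgspec field (assumed output shape):
#   currency: Literal["USD"]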

def _get_strict_field_constraint_value(self, constraint: str, value: Any) -> Any:
if value is None or constraint not in self._COMPARE_EXPRESSIONS:
return value

-        if any(
-            data_type.type == 'float' for data_type in self.data_type.all_data_types
-        ):
+        if any(data_type.type == "float" for data_type in self.data_type.all_data_types):
return float(value)
return int(value)
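The coercion above keeps strict comparison constraints numeric in the generated Meta. A sketch with hypothetical values:

# For a float-typed field, a schema constraint exclusiveMinimum: 0 comes out
# as Meta(gt=0.0); for an int-typed field it stays Meta(gt=0). Keys outside
# _COMPARE_EXPRESSIONS (e.g. pattern) pass through unchanged.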

@property
-    def field(self) -> Optional[str]:
+    def field(self) -> str | None:
"""for backwards compatibility"""
result = str(self)
-        if result == '':
+        if not result:
return None

return result

def __str__(self) -> str:
-        data: Dict[str, Any] = {
-            k: v for k, v in self.extras.items() if k in self._FIELD_KEYS
-        }
+        data: dict[str, Any] = {k: v for k, v in self.extras.items() if k in self._FIELD_KEYS}
if self.alias:
-            data['name'] = self.alias
+            data["name"] = self.alias

if self.default != UNDEFINED and self.default is not None:
-            data['default'] = self.default
+            data["default"] = self.default
elif not self.required:
-            data['default'] = None
+            data["default"] = None

if self.required:
data = {
k: v
for k, v in data.items()
if k
-                not in (
-                    'default',
-                    'default_factory',
-                )
+                not in {
+                    "default",
+                    "default_factory",
+                }
}
-        elif self.default and 'default_factory' not in data:
+        elif self.default and "default_factory" not in data:
default_factory = self._get_default_as_struct_model()
if default_factory is not None:
-                data.pop('default')
-                data['default_factory'] = default_factory
+                data.pop("default")
+                data["default_factory"] = default_factory

if not data:
-            return ''
+            return ""

-        if len(data) == 1 and 'default' in data:
-            return repr(data['default'])
+        if len(data) == 1 and "default" in data:
+            return repr(data["default"])

-        kwargs = [
-            f'{k}={v if k == "default_factory" else repr(v)}' for k, v in data.items()
-        ]
-        return f'field({", ".join(kwargs)})'
+        kwargs = [f"{k}={v if k == 'default_factory' else repr(v)}" for k, v in data.items()]
+        return f"field({', '.join(kwargs)})"

@property
-    def annotated(self) -> Optional[str]:
+    def annotated(self) -> str | None:
if not self.use_annotated: # pragma: no cover
return None

-        data: Dict[str, Any] = {
-            k: v for k, v in self.extras.items() if k in self._META_FIELD_KEYS
-        }
-        if (
-            self.constraints is not None
-            and not self.self_reference()
-            and not self.data_type.strict
-        ):
+        data: dict[str, Any] = {k: v for k, v in self.extras.items() if k in self._META_FIELD_KEYS}
+        if self.constraints is not None and not self.self_reference() and not self.data_type.strict:
data = {
**data,
**{
@@ -257,59 +242,60 @@ def annotated(self) -> Optional[str]:
},
}

-        meta_arguments = sorted(
-            f'{k}={repr(v)}' for k, v in data.items() if v is not None
-        )
+        meta_arguments = sorted(f"{k}={v!r}" for k, v in data.items() if v is not None)
if not meta_arguments:
return None

-        meta = f'Meta({", ".join(meta_arguments)})'
+        meta = f"Meta({', '.join(meta_arguments)})"

-        if not self.required and not self.extras.get('is_classvar'):
+        if not self.required and not self.extras.get("is_classvar"):
type_hint = self.data_type.type_hint
-            annotated_type = f'Annotated[{type_hint}, {meta}]'
+            annotated_type = f"Annotated[{type_hint}, {meta}]"
return get_optional_type(annotated_type, self.data_type.use_union_operator)

-        annotated_type = f'Annotated[{self.type_hint}, {meta}]'
-        if self.extras.get('is_classvar'):
-            annotated_type = f'ClassVar[{annotated_type}]'
+        annotated_type = f"Annotated[{self.type_hint}, {meta}]"
+        if self.extras.get("is_classvar"):
+            annotated_type = f"ClassVar[{annotated_type}]"

return annotated_type
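The three shapes this property can emit, sketched with assumed field types:

# Required with constraints  ->  Annotated[int, Meta(ge=0)]
# Optional                   ->  Optional[Annotated[int, Meta(ge=0)]]  (or "Annotated[...] | None")
# Class variable             ->  ClassVar[Annotated[str, Meta(min_length=1)]]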

-    def _get_default_as_struct_model(self) -> Optional[str]:
+    def _get_default_as_struct_model(self) -> str | None:
for data_type in self.data_type.data_types or (self.data_type,):
# TODO: Check nested data_types
if data_type.is_dict or self.data_type.is_union:
# TODO: Parse Union and dict model for default
continue # pragma: no cover
-            elif data_type.is_list and len(data_type.data_types) == 1:
-                data_type = data_type.data_types[0]
+            if data_type.is_list and len(data_type.data_types) == 1:
+                data_type_child = data_type.data_types[0]
if ( # pragma: no cover
-                data_type.reference
-                and (
-                    isinstance(data_type.reference.source, Struct)
-                    or isinstance(data_type.reference.source, RootModel)
-                )
+                data_type_child.reference
+                and (isinstance(data_type_child.reference.source, (Struct, RootModel)))
and isinstance(self.default, list)
):
-                    return f'lambda: {self._PARSE_METHOD}({repr(self.default)}, type=list[{data_type.alias or data_type.reference.source.class_name}])'
+                    return (
+                        f"lambda: {self._PARSE_METHOD}({self.default!r}, "
+                        f"type=list[{data_type_child.alias or data_type_child.reference.source.class_name}])"
+                    )
elif data_type.reference and isinstance(data_type.reference.source, Struct):
-                return f'lambda: {self._PARSE_METHOD}({repr(self.default)}, type={data_type.alias or data_type.reference.source.class_name})'
+                return (
+                    f"lambda: {self._PARSE_METHOD}({self.default!r}, "
+                    f"type={data_type.alias or data_type.reference.source.class_name})"
+                )
return None
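A Struct-typed default cannot be written inline as a literal, so the generator emits a default_factory that rebuilds it through msgspec.convert. Roughly what lands in the output (model names assumed):

# pet: Pet = field(default_factory=lambda: convert({'name': 'dog'}, type=Pet))
# pets: list[Pet] = field(default_factory=lambda: convert([{'name': 'dog'}], type=list[Pet]))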


class DataTypeManager(_DataTypeManager):
-    def __init__(
+    def __init__(  # noqa: PLR0913, PLR0917
self,
python_version: PythonVersion = PythonVersion.PY_38,
-        use_standard_collections: bool = False,
-        use_generic_container_types: bool = False,
-        strict_types: Optional[Sequence[StrictTypes]] = None,
-        use_non_positive_negative_number_constrained_types: bool = False,
-        use_union_operator: bool = False,
-        use_pendulum: bool = False,
+        use_standard_collections: bool = False,  # noqa: FBT001, FBT002
+        use_generic_container_types: bool = False,  # noqa: FBT001, FBT002
+        strict_types: Sequence[StrictTypes] | None = None,
+        use_non_positive_negative_number_constrained_types: bool = False,  # noqa: FBT001, FBT002
+        use_union_operator: bool = False,  # noqa: FBT001, FBT002
+        use_pendulum: bool = False,  # noqa: FBT001, FBT002
target_datetime_class: DatetimeClassType = DatetimeClassType.Datetime,
-    ):
+    ) -> None:
super().__init__(
python_version,
use_standard_collections,
@@ -332,7 +318,7 @@ def __init__(
else {}
)

-        self.type_map: Dict[Types, DataType] = {
+        self.type_map: dict[Types, DataType] = {
**type_map_factory(self.data_type),
**datetime_map,
}
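Taken together, the pieces in this file produce output along these lines. A minimal hand-written sketch (model, fields, and constraints are assumed, not from the diff), which round-trips through msgspec:

from typing import Annotated, Optional

from msgspec import Meta, Struct, convert, field

class User(Struct, kw_only=True):
    name: Annotated[str, Meta(min_length=1)]
    age: Optional[Annotated[int, Meta(ge=0)]] = None
    nick_name: str = field(name="nickName", default="anon")

# convert() honors the per-field rename and the Meta constraints:
user = convert({"name": "a", "nickName": "b"}, type=User)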
34 changes: 34 additions & 0 deletions src/datamodel_code_generator/model/pydantic/__init__.py
@@ -0,0 +1,34 @@
+from __future__ import annotations
+
+from typing import Iterable, Optional
+
+from pydantic import BaseModel as _BaseModel
+
+from .base_model import BaseModel, DataModelField
+from .custom_root_type import CustomRootType
+from .dataclass import DataClass
+from .types import DataTypeManager
+
+
+def dump_resolve_reference_action(class_names: Iterable[str]) -> str:
+    return "\n".join(f"{class_name}.update_forward_refs()" for class_name in class_names)
+
+
+class Config(_BaseModel):
+    extra: Optional[str] = None  # noqa: UP045
+    title: Optional[str] = None  # noqa: UP045
+    allow_population_by_field_name: Optional[bool] = None  # noqa: UP045
+    allow_extra_fields: Optional[bool] = None  # noqa: UP045
+    allow_mutation: Optional[bool] = None  # noqa: UP045
+    arbitrary_types_allowed: Optional[bool] = None  # noqa: UP045
+    orm_mode: Optional[bool] = None  # noqa: UP045
+
+
+__all__ = [
+    "BaseModel",
+    "CustomRootType",
+    "DataClass",
+    "DataModelField",
+    "DataTypeManager",
+    "dump_resolve_reference_action",
+]
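A short usage note on the helper above (inputs assumed): it emits the pydantic v1 forward-reference resolution calls that are appended to each generated module.

print(dump_resolve_reference_action(["User", "Team"]))
# User.update_forward_refs()
# Team.update_forward_refs()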