TypeAdapter

Bases: Generic[T]

Usage Documentation

TypeAdapter

Type adapters provide a flexible way to perform validation and serialization based on a Python type.

A TypeAdapter instance exposes some of the functionality from BaseModel instance methods for types that do not have such methods (such as dataclasses, primitive types, and more).

Note: TypeAdapter instances are not types, and cannot be used as type annotations for fields.
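A minimal sketch of what this looks like in practice, using a plain `list[int]` (the type is illustrative):

```python
from pydantic import TypeAdapter

# Adapt a plain built-in generic type; no BaseModel subclass is needed.
ta = TypeAdapter(list[int])

# Validation coerces compatible inputs and raises ValidationError otherwise.
assert ta.validate_python(['1', 2]) == [1, 2]

# The same adapter also handles serialization.
assert ta.dump_json([1, 2]) == b'[1,2]'
```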

Parameters

Name Type Description Default
type Any

The type associated with the TypeAdapter.

Required
config ConfigDict | None

Configuration for the TypeAdapter, should be a dictionary conforming to ConfigDict.

Note

You cannot provide a configuration when instantiating a TypeAdapter if the type you're using has its own config that cannot be overridden (e.g. BaseModel, TypedDict, dataclass). A type-adapter-config-unused error will be raised in this case.

None
_parent_depth int

Depth at which to search for the parent frame. This frame is used when resolving forward annotations during schema building, by looking for the globals and locals of this frame. Defaults to 2, which will result in the frame where the TypeAdapter was instantiated.

Note

This parameter is named with an underscore to suggest its private nature and discourage use. It may be deprecated in a minor version, so we only recommend using it if you're comfortable with potential change in behavior/support. Its default value is 2 because internally, the TypeAdapter class makes another call to fetch the frame.

2
module str | None

The module passed to plugins if provided.

None

Attributes

Name Type Description
core_schema CoreSchema

The core schema for the type.

validator SchemaValidator | PluggableSchemaValidator

The schema validator for the type.

serializer SchemaSerializer

The schema serializer for the type.

pydantic_complete bool

Whether the core schema for the type was successfully built.

Compatibility with mypy

Depending on the type used, mypy might raise an error when instantiating a TypeAdapter. As a workaround, you can explicitly annotate your variable:

from typing import Union

from pydantic import TypeAdapter

ta: TypeAdapter[Union[str, int]] = TypeAdapter(Union[str, int])  # type: ignore[arg-type]
Namespace management nuances and implementation details

Here, we collect some notes on namespace management, and subtle differences from BaseModel:

BaseModel uses its own __module__ to find out where it was defined and then looks for symbols to resolve forward references in those globals. On the other hand, TypeAdapter can be initialized with arbitrary objects, which may not be types and thus do not have a __module__ available. So instead we look at the globals in our parent stack frame.

It is expected that the ns_resolver passed to this function will have the correct namespace for the type we're adapting. See the source code of TypeAdapter.__init__ and TypeAdapter.rebuild for various ways to construct this namespace.

This works for the case where this function is called in a module that has the target of forward references in its scope, but does not always work for more complex cases.

For example, take the following:

a.py
IntList = list[int]
OuterDict = dict[str, 'IntList']
b.py
from a import OuterDict

from pydantic import TypeAdapter

IntList = int  # replaces the symbol the forward reference is looking for
v = TypeAdapter(OuterDict)
v({'x': 1})  # should fail but doesn't

If OuterDict were a BaseModel, this would work because it would resolve the forward reference within the a.py namespace. But TypeAdapter(OuterDict) cannot determine which module OuterDict came from.

In other words, the assumption that all forward references exist in the module we are being called from is not technically always true. Although most of the time it is and it works fine for recursive models and such, BaseModel's behavior isn't perfect either and can break in similar ways, so there is no right or wrong between the two.

But at the very least this behavior is subtly different from BaseModel's.

Source code in pydantic/type_adapter.py
def __init__(
    self,
    type: Any,
    *,
    config: ConfigDict | None = None,
    _parent_depth: int = 2,
    module: str | None = None,
) -> None:
    if _type_has_config(type) and config is not None:
        raise PydanticUserError(
            'Cannot use `config` when the type is a BaseModel, dataclass or TypedDict.'
            ' These types can have their own config and setting the config via the `config`'
            ' parameter to TypeAdapter will not override it, thus the `config` you passed to'
            ' TypeAdapter becomes meaningless, which is probably not what you want.',
            code='type-adapter-config-unused',
        )

    self._type = type
    self._config = config
    self._parent_depth = _parent_depth
    self.pydantic_complete = False

    parent_frame = self._fetch_parent_frame()
    if isinstance(type, types.FunctionType):
        # Special case functions, which are *not* pushed to the `NsResolver` stack and without this special case
        # would only have access to the parent namespace where the `TypeAdapter` was instantiated (if the function is defined
        # in another module, we need to look at that module's globals).
        if parent_frame is not None:
            # `f_locals` is the namespace where the type adapter was instantiated (~ to `f_globals` if at the module level):
            parent_ns = parent_frame.f_locals
        else:  # pragma: no cover
            parent_ns = None
        globalns, localns = _namespace_utils.ns_for_function(
            type,
            parent_namespace=parent_ns,
        )
        parent_namespace = None
    else:
        if parent_frame is not None:
            globalns = parent_frame.f_globals
            # Do not provide a local ns if the type adapter happens to be instantiated at the module level:
            localns = parent_frame.f_locals if parent_frame.f_locals is not globalns else {}
        else:  # pragma: no cover
            globalns = {}
            localns = {}
        parent_namespace = localns

    self._module_name = module or cast(str, globalns.get('__name__', ''))
    self._init_core_attrs(
        ns_resolver=_namespace_utils.NsResolver(
            namespaces_tuple=_namespace_utils.NamespacesTuple(locals=localns, globals=globalns),
            parent_namespace=parent_namespace,
        ),
        force=False,
    )

rebuild

rebuild(
    *,
    force: bool = False,
    raise_errors: bool = True,
    _parent_namespace_depth: int = 2,
    _types_namespace: MappingNamespace | None = None
) -> bool | None

Try to rebuild the pydantic-core schema for the adapter's type.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters

Name Type Description Default
force bool

Whether to force the rebuilding of the type adapter's schema, defaults to False.

False
raise_errors bool

Whether to raise errors, defaults to True.

True
_parent_namespace_depth int

Depth at which to search for the parent frame. This frame is used when resolving forward annotations during schema rebuilding, by looking for the locals of this frame. Defaults to 2, which will result in the frame where the method was called.

2
_types_namespace MappingNamespace | None

An explicit types namespace to use, instead of using the local namespace from the parent frame. Defaults to None.

None

Returns

Type Description
bool | None

Returns None if the schema is already "complete" and rebuilding was not required.

bool | None

If rebuilding was required, returns True if rebuilding was successful, otherwise False.

Source code in pydantic/type_adapter.py
def rebuild(
    self,
    *,
    force: bool = False,
    raise_errors: bool = True,
    _parent_namespace_depth: int = 2,
    _types_namespace: _namespace_utils.MappingNamespace | None = None,
) -> bool | None:
    """Try to rebuild the pydantic-core schema for the adapter's type.

    This may be necessary when one of the annotations is a ForwardRef which could not be resolved during
    the initial attempt to build the schema, and automatic rebuilding fails.

    Args:
        force: Whether to force the rebuilding of the type adapter's schema, defaults to `False`.
        raise_errors: Whether to raise errors, defaults to `True`.
        _parent_namespace_depth: Depth at which to search for the [parent frame][frame-objects]. This
            frame is used when resolving forward annotations during schema rebuilding, by looking for
            the locals of this frame. Defaults to 2, which will result in the frame where the method
            was called.
        _types_namespace: An explicit types namespace to use, instead of using the local namespace
            from the parent frame. Defaults to `None`.

    Returns:
        Returns `None` if the schema is already "complete" and rebuilding was not required.
        If rebuilding _was_ required, returns `True` if rebuilding was successful, otherwise `False`.
    """
    if not force and self.pydantic_complete:
        return None

    if _types_namespace is not None:
        rebuild_ns = _types_namespace
    elif _parent_namespace_depth > 0:
        rebuild_ns = _typing_extra.parent_frame_namespace(parent_depth=_parent_namespace_depth, force=True) or {}
    else:
        rebuild_ns = {}

    # we have to manually fetch globals here because there's no type on the stack of the NsResolver
    # and so we skip the globalns = get_module_ns_of(typ) call that would normally happen
    globalns = sys._getframe(max(_parent_namespace_depth - 1, 1)).f_globals
    ns_resolver = _namespace_utils.NsResolver(
        namespaces_tuple=_namespace_utils.NamespacesTuple(locals=rebuild_ns, globals=globalns),
        parent_namespace=rebuild_ns,
    )
    return self._init_core_attrs(ns_resolver=ns_resolver, force=True, raise_errors=raise_errors)

validate_python

validate_python(
    object: Any,
    /,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    from_attributes: bool | None = None,
    context: Any | None = None,
    experimental_allow_partial: (
        bool | Literal["off", "on", "trailing-strings"]
    ) = False,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> T

Validate a Python object against the model.

Parameters

Name Type Description Default
object Any

The Python object to validate against the model.

Required
strict bool | None

Whether to strictly check types.

None
extra ExtraValues | None

Whether to ignore, allow, or forbid extra data during model validation. See the extra configuration value for details.

None
from_attributes bool | None

Whether to extract data from object attributes.

None
context Any | None

Additional context to pass to the validator.

None
experimental_allow_partial bool | Literal['off', 'on', 'trailing-strings']

Experimental whether to enable partial validation, e.g. to process streams. * False / 'off': Default behavior, no partial validation. * True / 'on': Enable partial validation. * 'trailing-strings': Enable partial validation and allow trailing strings in the input.

False
by_alias bool | None

Whether to use the field's alias when validating against the provided input data.

None
by_name bool | None

Whether to use the field's name when validating against the provided input data.

None

Note

When using TypeAdapter with a Pydantic dataclass, the use of the from_attributes argument is not supported.

Returns

Type Description
T

The validated object.
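A minimal sketch of coercing a dict into an adapted stdlib dataclass (the User dataclass here is a hypothetical example):

```python
from dataclasses import dataclass

from pydantic import TypeAdapter


@dataclass
class User:
    id: int
    name: str


ta = TypeAdapter(User)

# Fields are coerced according to their annotations ('1' becomes 1).
user = ta.validate_python({'id': '1', 'name': 'Anne'})
assert user == User(id=1, name='Anne')
```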

Source code in pydantic/type_adapter.py
def validate_python(
    self,
    object: Any,
    /,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    from_attributes: bool | None = None,
    context: Any | None = None,
    experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> T:
    """Validate a Python object against the model.

    Args:
        object: The Python object to validate against the model.
        strict: Whether to strictly check types.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        from_attributes: Whether to extract data from object attributes.
        context: Additional context to pass to the validator.
        experimental_allow_partial: **Experimental** whether to enable
            [partial validation](../concepts/experimental.md#partial-validation), e.g. to process streams.
            * False / 'off': Default behavior, no partial validation.
            * True / 'on': Enable partial validation.
            * 'trailing-strings': Enable partial validation and allow trailing strings in the input.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    !!! note
        When using `TypeAdapter` with a Pydantic `dataclass`, the use of the `from_attributes`
        argument is not supported.

    Returns:
        The validated object.
    """
    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return self.validator.validate_python(
        object,
        strict=strict,
        extra=extra,
        from_attributes=from_attributes,
        context=context,
        allow_partial=experimental_allow_partial,
        by_alias=by_alias,
        by_name=by_name,
    )

validate_json

validate_json(
    data: str | bytes | bytearray,
    /,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    experimental_allow_partial: (
        bool | Literal["off", "on", "trailing-strings"]
    ) = False,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> T

Usage Documentation

JSON Parsing

Validate a JSON string or bytes against the model.

Parameters

Name Type Description Default
data str | bytes | bytearray

The JSON data to validate against the model.

Required
strict bool | None

Whether to strictly check types.

None
extra ExtraValues | None

Whether to ignore, allow, or forbid extra data during model validation. See the extra configuration value for details.

None
context Any | None

Additional context to use during validation.

None
experimental_allow_partial bool | Literal['off', 'on', 'trailing-strings']

Experimental whether to enable partial validation, e.g. to process streams. * False / 'off': Default behavior, no partial validation. * True / 'on': Enable partial validation. * 'trailing-strings': Enable partial validation and allow trailing strings in the input.

False
by_alias bool | None

Whether to use the field's alias when validating against the provided input data.

None
by_name bool | None

Whether to use the field's name when validating against the provided input data.

None

Returns

Type Description
T

The validated object.
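A minimal sketch, parsing and validating JSON bytes in one step (the dict[str, int] adapter is illustrative):

```python
from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, int])

# JSON parsing and validation happen together; values are coerced ('2' becomes 2).
assert ta.validate_json(b'{"a": 1, "b": "2"}') == {'a': 1, 'b': 2}
```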

Source code in pydantic/type_adapter.py
def validate_json(
    self,
    data: str | bytes | bytearray,
    /,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> T:
    """!!! abstract "Usage Documentation"
        [JSON Parsing](../concepts/json.md#json-parsing)

    Validate a JSON string or bytes against the model.

    Args:
        data: The JSON data to validate against the model.
        strict: Whether to strictly check types.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Additional context to use during validation.
        experimental_allow_partial: **Experimental** whether to enable
            [partial validation](../concepts/experimental.md#partial-validation), e.g. to process streams.
            * False / 'off': Default behavior, no partial validation.
            * True / 'on': Enable partial validation.
            * 'trailing-strings': Enable partial validation and allow trailing strings in the input.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated object.
    """
    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return self.validator.validate_json(
        data,
        strict=strict,
        extra=extra,
        context=context,
        allow_partial=experimental_allow_partial,
        by_alias=by_alias,
        by_name=by_name,
    )

validate_strings

validate_strings(
    obj: Any,
    /,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    experimental_allow_partial: (
        bool | Literal["off", "on", "trailing-strings"]
    ) = False,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> T

Validate an object containing string data against the model.

Parameters

Name Type Description Default
obj Any

The object containing string data to validate.

Required
strict bool | None

Whether to strictly check types.

None
extra ExtraValues | None

Whether to ignore, allow, or forbid extra data during model validation. See the extra configuration value for details.

None
context Any | None

Additional context to use during validation.

None
experimental_allow_partial bool | Literal['off', 'on', 'trailing-strings']

Experimental whether to enable partial validation, e.g. to process streams. * False / 'off': Default behavior, no partial validation. * True / 'on': Enable partial validation. * 'trailing-strings': Enable partial validation and allow trailing strings in the input.

False
by_alias bool | None

Whether to use the field's alias when validating against the provided input data.

None
by_name bool | None

Whether to use the field's name when validating against the provided input data.

None

Returns

Type Description
T

The validated object.
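A minimal sketch, validating data whose leaf values are all strings, as they might be after parsing e.g. query parameters (the adapted type is illustrative):

```python
from datetime import date

from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, date])

# String leaf values are validated as if they appeared in JSON string positions.
assert ta.validate_strings({'signup': '2024-01-01'}) == {'signup': date(2024, 1, 1)}
```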

Source code in pydantic/type_adapter.py
def validate_strings(
    self,
    obj: Any,
    /,
    *,
    strict: bool | None = None,
    extra: ExtraValues | None = None,
    context: Any | None = None,
    experimental_allow_partial: bool | Literal['off', 'on', 'trailing-strings'] = False,
    by_alias: bool | None = None,
    by_name: bool | None = None,
) -> T:
    """Validate object contains string data against the model.

    Args:
        obj: The object contains string data to validate.
        strict: Whether to strictly check types.
        extra: Whether to ignore, allow, or forbid extra data during model validation.
            See the [`extra` configuration value][pydantic.ConfigDict.extra] for details.
        context: Additional context to use during validation.
        experimental_allow_partial: **Experimental** whether to enable
            [partial validation](../concepts/experimental.md#partial-validation), e.g. to process streams.
            * False / 'off': Default behavior, no partial validation.
            * True / 'on': Enable partial validation.
            * 'trailing-strings': Enable partial validation and allow trailing strings in the input.
        by_alias: Whether to use the field's alias when validating against the provided input data.
        by_name: Whether to use the field's name when validating against the provided input data.

    Returns:
        The validated object.
    """
    if by_alias is False and by_name is not True:
        raise PydanticUserError(
            'At least one of `by_alias` or `by_name` must be set to True.',
            code='validate-by-alias-and-name-false',
        )

    return self.validator.validate_strings(
        obj,
        strict=strict,
        extra=extra,
        context=context,
        allow_partial=experimental_allow_partial,
        by_alias=by_alias,
        by_name=by_name,
    )

get_default_value

get_default_value(
    *,
    strict: bool | None = None,
    context: Any | None = None
) -> Some[T] | None

Get the default value for the wrapped type.

Parameters

Name Type Description Default
strict bool | None

Whether to strictly check types.

None
context Any | None

Additional context to pass to the validator.

None

Returns

Type Description
Some[T] | None

The default value wrapped in a Some if there is one, or None if not.
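A minimal sketch, assuming a default attached via Field metadata inside Annotated (the types and default here are illustrative):

```python
from typing import Annotated

from pydantic import Field, TypeAdapter

# A default attached via Field metadata is reported wrapped in Some.
ta = TypeAdapter(Annotated[int, Field(default=42)])
default = ta.get_default_value()
assert default is not None
assert default.value == 42

# A type without a default yields None (not Some(None)).
assert TypeAdapter(int).get_default_value() is None
```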

Source code in pydantic/type_adapter.py
def get_default_value(self, *, strict: bool | None = None, context: Any | None = None) -> Some[T] | None:
    """Get the default value for the wrapped type.

    Args:
        strict: Whether to strictly check types.
        context: Additional context to pass to the validator.

    Returns:
        The default value wrapped in a `Some` if there is one or None if not.
    """
    return self.validator.get_default_value(strict=strict, context=context)

dump_python

dump_python(
    instance: T,
    /,
    *,
    mode: Literal["json", "python"] = "python",
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: (
        bool | Literal["none", "warn", "error"]
    ) = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
    context: Any | None = None,
) -> Any

Dump an instance of the adapted type to a Python object.

Parameters

Name Type Description Default
instance T

The Python object to serialize.

Required
mode Literal['json', 'python']

The output format.

'python'
include IncEx | None

Fields to include in the output.

None
exclude IncEx | None

Fields to exclude from the output.

None
by_alias bool | None

Whether to use alias names for field names.

None
exclude_unset bool

Whether to exclude unset fields.

False
exclude_defaults bool

Whether to exclude fields with default values.

False
exclude_none bool

Whether to exclude fields with None values.

False
exclude_computed_fields bool

Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.

False
round_trip bool

Whether to output the serialized data in a way that is compatible with deserialization.

False
warnings bool | Literal['none', 'warn', 'error']

How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError.

True
fallback Callable[[Any], Any] | None

A function to call when an unknown value is encountered. If not provided, a PydanticSerializationError error is raised.

None
serialize_as_any bool

Whether to serialize fields with duck-typing serialization behavior.

False
context Any | None

Additional context to pass to the serializer.

None

Returns

Type Description
Any

The serialized object.
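A minimal sketch contrasting the two output modes (the adapted type is illustrative):

```python
from pydantic import TypeAdapter

ta = TypeAdapter(list[tuple[int, str]])
value = [(1, 'a'), (2, 'b')]

# mode='python' keeps Python-native types such as tuples ...
assert ta.dump_python(value) == [(1, 'a'), (2, 'b')]

# ... while mode='json' restricts the output to JSON-compatible types.
assert ta.dump_python(value, mode='json') == [[1, 'a'], [2, 'b']]
```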

Source code in pydantic/type_adapter.py
def dump_python(
    self,
    instance: T,
    /,
    *,
    mode: Literal['json', 'python'] = 'python',
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
    context: Any | None = None,
) -> Any:
    """Dump an instance of the adapted type to a Python object.

    Args:
        instance: The Python object to serialize.
        mode: The output format.
        include: Fields to include in the output.
        exclude: Fields to exclude from the output.
        by_alias: Whether to use alias names for field names.
        exclude_unset: Whether to exclude unset fields.
        exclude_defaults: Whether to exclude fields with default values.
        exclude_none: Whether to exclude fields with None values.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: Whether to output the serialized data in a way that is compatible with deserialization.
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.
        context: Additional context to pass to the serializer.

    Returns:
        The serialized object.
    """
    return self.serializer.to_python(
        instance,
        mode=mode,
        by_alias=by_alias,
        include=include,
        exclude=exclude,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
        context=context,
    )

dump_json

dump_json(
    instance: T,
    /,
    *,
    indent: int | None = None,
    ensure_ascii: bool = False,
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: (
        bool | Literal["none", "warn", "error"]
    ) = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
    context: Any | None = None,
) -> bytes

Usage Documentation

JSON Serialization

Serialize an instance of the adapted type to JSON.

Parameters

Name Type Description Default
instance T

The instance to be serialized.

Required
indent int | None

Number of spaces for JSON indentation.

None
ensure_ascii bool

If True, the output is guaranteed to have all incoming non-ASCII characters escaped. If False (the default), these characters will be output as-is.

False
include IncEx | None

Fields to include.

None
exclude IncEx | None

Fields to exclude.

None
by_alias bool | None

Whether to use alias names for field names.

None
exclude_unset bool

Whether to exclude unset fields.

False
exclude_defaults bool

Whether to exclude fields with default values.

False
exclude_none bool

Whether to exclude fields with a value of None.

False
exclude_computed_fields bool

Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.

False
round_trip bool

Whether to serialize and deserialize the instance to ensure round-tripping.

False
warnings bool | Literal['none', 'warn', 'error']

How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError.

True
fallback Callable[[Any], Any] | None

A function to call when an unknown value is encountered. If not provided, a PydanticSerializationError error is raised.

None
serialize_as_any bool

Whether to serialize fields with duck-typing serialization behavior.

False
context Any | None

Additional context to pass to the serializer.

None

Returns

Type Description
bytes

The JSON representation of the given instance as bytes.
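A minimal sketch of the bytes output and the indent option (the adapted type is illustrative):

```python
from pydantic import TypeAdapter

ta = TypeAdapter(dict[str, int])

# Output is compact bytes by default.
assert ta.dump_json({'a': 1, 'b': 2}) == b'{"a":1,"b":2}'

# indent switches to pretty-printed, multi-line output.
assert b'\n' in ta.dump_json({'a': 1}, indent=2)
```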

Source code in pydantic/type_adapter.py
def dump_json(
    self,
    instance: T,
    /,
    *,
    indent: int | None = None,
    ensure_ascii: bool = False,
    include: IncEx | None = None,
    exclude: IncEx | None = None,
    by_alias: bool | None = None,
    exclude_unset: bool = False,
    exclude_defaults: bool = False,
    exclude_none: bool = False,
    exclude_computed_fields: bool = False,
    round_trip: bool = False,
    warnings: bool | Literal['none', 'warn', 'error'] = True,
    fallback: Callable[[Any], Any] | None = None,
    serialize_as_any: bool = False,
    context: Any | None = None,
) -> bytes:
    """!!! abstract "Usage Documentation"
        [JSON Serialization](../concepts/json.md#json-serialization)

    Serialize an instance of the adapted type to JSON.

    Args:
        instance: The instance to be serialized.
        indent: Number of spaces for JSON indentation.
        ensure_ascii: If `True`, the output is guaranteed to have all incoming non-ASCII characters escaped.
            If `False` (the default), these characters will be output as-is.
        include: Fields to include.
        exclude: Fields to exclude.
        by_alias: Whether to use alias names for field names.
        exclude_unset: Whether to exclude unset fields.
        exclude_defaults: Whether to exclude fields with default values.
        exclude_none: Whether to exclude fields with a value of `None`.
        exclude_computed_fields: Whether to exclude computed fields.
            While this can be useful for round-tripping, it is usually recommended to use the dedicated
            `round_trip` parameter instead.
        round_trip: Whether to serialize and deserialize the instance to ensure round-tripping.
        warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
            "error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
        fallback: A function to call when an unknown value is encountered. If not provided,
            a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
        serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.
        context: Additional context to pass to the serializer.

    Returns:
        The JSON representation of the given instance as bytes.
    """
    return self.serializer.to_json(
        instance,
        indent=indent,
        ensure_ascii=ensure_ascii,
        include=include,
        exclude=exclude,
        by_alias=by_alias,
        exclude_unset=exclude_unset,
        exclude_defaults=exclude_defaults,
        exclude_none=exclude_none,
        exclude_computed_fields=exclude_computed_fields,
        round_trip=round_trip,
        warnings=warnings,
        fallback=fallback,
        serialize_as_any=serialize_as_any,
        context=context,
    )

json_schema

json_schema(
    *,
    by_alias: bool = True,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    union_format: Literal[
        "any_of", "primitive_type_array"
    ] = "any_of",
    schema_generator: type[
        GenerateJsonSchema
    ] = GenerateJsonSchema,
    mode: JsonSchemaMode = "validation"
) -> dict[str, Any]

Generate a JSON schema for the adapted type.

Parameters

Name Type Description Default
by_alias bool

Whether to use alias names for field names.

True
ref_template str

The format string used for generating $ref strings.

DEFAULT_REF_TEMPLATE
union_format Literal['any_of', 'primitive_type_array']

The format to use when combining schemas from unions together. Can be one of:

  • 'any_of': Use the anyOf keyword to combine schemas (the default).
  • 'primitive_type_array': Use the type keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
'any_of'
schema_generator type[GenerateJsonSchema]

To override the logic used to generate the JSON schema, you can create a subclass of GenerateJsonSchema with your desired modifications.

GenerateJsonSchema
mode JsonSchemaMode

The mode in which to generate the schema.

'validation'

Returns

Type Description
dict[str, Any]

The JSON schema for the model as a dictionary.
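A minimal sketch of the returned dictionary (the adapted type is illustrative):

```python
from pydantic import TypeAdapter

ta = TypeAdapter(list[int])

# Simple types produce an inline schema with no $defs section.
assert ta.json_schema() == {'type': 'array', 'items': {'type': 'integer'}}
```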

Source code in pydantic/type_adapter.py
def json_schema(
    self,
    *,
    by_alias: bool = True,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
    mode: JsonSchemaMode = 'validation',
) -> dict[str, Any]:
    """Generate a JSON schema for the adapted type.

    Args:
        by_alias: Whether to use alias names for field names.
        ref_template: The format string used for generating $ref strings.
        union_format: The format to use when combining schemas from unions together. Can be one of:

            - `'any_of'`: Use the [`anyOf`](https://json-schema.org/understanding-json-schema/reference/combining#anyOf)
            keyword to combine schemas (the default).
            - `'primitive_type_array'`: Use the [`type`](https://json-schema.org/understanding-json-schema/reference/type)
            keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive
            type (`string`, `boolean`, `null`, `integer` or `number`) or contains constraints/metadata, falls back to
            `any_of`.
        schema_generator: To override the logic used to generate the JSON schema, as a subclass of
            `GenerateJsonSchema` with your desired modifications
        mode: The mode in which to generate the schema.

    Returns:
        The JSON schema for the model as a dictionary.
    """
    schema_generator_instance = schema_generator(
        by_alias=by_alias, ref_template=ref_template, union_format=union_format
    )
    if isinstance(self.core_schema, _mock_val_ser.MockCoreSchema):
        self.core_schema.rebuild()
        assert not isinstance(self.core_schema, _mock_val_ser.MockCoreSchema), 'this is a bug! please report it'
    return schema_generator_instance.generate(self.core_schema, mode=mode)

json_schemas staticmethod

json_schemas(
    inputs: Iterable[
        tuple[
            JsonSchemaKeyT, JsonSchemaMode, TypeAdapter[Any]
        ]
    ],
    /,
    *,
    by_alias: bool = True,
    title: str | None = None,
    description: str | None = None,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    union_format: Literal[
        "any_of", "primitive_type_array"
    ] = "any_of",
    schema_generator: type[
        GenerateJsonSchema
    ] = GenerateJsonSchema,
) -> tuple[
    dict[
        tuple[JsonSchemaKeyT, JsonSchemaMode],
        JsonSchemaValue,
    ],
    JsonSchemaValue,
]

Generate a JSON schema including definitions from multiple type adapters.

Parameters

Name Type Description Default
inputs Iterable[tuple[JsonSchemaKeyT, JsonSchemaMode, TypeAdapter[Any]]]

Inputs to schema generation. The first two items will form the keys of the (first) output mapping; the type adapters will provide the core schemas that get converted into definitions in the output JSON schema.

Required
by_alias bool

Whether to use alias names.

True
title str | None

The title for the schema.

None
description str | None

The description for the schema.

None
ref_template str

The format string used for generating $ref strings.

DEFAULT_REF_TEMPLATE
union_format Literal['any_of', 'primitive_type_array']

The format to use when combining schemas from unions together. Can be one of:

  • 'any_of': Use the anyOf keyword to combine schemas (the default).
  • 'primitive_type_array': Use the type keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
'any_of'
schema_generator type[GenerateJsonSchema]

The generator class used for creating the schema.

GenerateJsonSchema

Returns

Type Description
tuple[dict[tuple[JsonSchemaKeyT, JsonSchemaMode], JsonSchemaValue], JsonSchemaValue]

A tuple where:

  • The first element is a dictionary whose keys are tuples of JSON schema key type and JSON mode, and whose values are the JSON schemas corresponding to that pair of inputs. (These schemas may have JsonRef references to definitions that are defined in the second returned element.)
  • The second element is a JSON schema containing all definitions referenced in the first returned element, along with the optional title and description keys.
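A minimal sketch of the two returned elements (the keys and title here are hypothetical):

```python
from pydantic import TypeAdapter

schemas_map, defs_schema = TypeAdapter.json_schemas(
    [('num', 'validation', TypeAdapter(int)), ('text', 'validation', TypeAdapter(str))],
    title='Example definitions',
)

# Each (key, mode) pair maps to the schema produced by that adapter.
assert schemas_map[('num', 'validation')] == {'type': 'integer'}
assert schemas_map[('text', 'validation')] == {'type': 'string'}

# With no shared definitions, the second element only carries the title.
assert defs_schema == {'title': 'Example definitions'}
```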
Source code in pydantic/type_adapter.py
@staticmethod
def json_schemas(
    inputs: Iterable[tuple[JsonSchemaKeyT, JsonSchemaMode, TypeAdapter[Any]]],
    /,
    *,
    by_alias: bool = True,
    title: str | None = None,
    description: str | None = None,
    ref_template: str = DEFAULT_REF_TEMPLATE,
    union_format: Literal['any_of', 'primitive_type_array'] = 'any_of',
    schema_generator: type[GenerateJsonSchema] = GenerateJsonSchema,
) -> tuple[dict[tuple[JsonSchemaKeyT, JsonSchemaMode], JsonSchemaValue], JsonSchemaValue]:
    """Generate a JSON schema including definitions from multiple type adapters.

    Args:
        inputs: Inputs to schema generation. The first two items will form the keys of the (first)
            output mapping; the type adapters will provide the core schemas that get converted into
            definitions in the output JSON schema.
        by_alias: Whether to use alias names.
        title: The title for the schema.
        description: The description for the schema.
        ref_template: The format string used for generating $ref strings.
        union_format: The format to use when combining schemas from unions together. Can be one of:

            - `'any_of'`: Use the [`anyOf`](https://json-schema.fullstack.org.cn/understanding-json-schema/reference/combining#anyOf)
            keyword to combine schemas (the default).
            - `'primitive_type_array'`: Use the [`type`](https://json-schema.fullstack.org.cn/understanding-json-schema/reference/type)
            keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive
            type (`string`, `boolean`, `null`, `integer` or `number`) or contains constraints/metadata, falls back to
            `any_of`.
        schema_generator: The generator class used for creating the schema.

    Returns:
        A tuple where:

            - The first element is a dictionary whose keys are tuples of JSON schema key type and JSON mode, and
                whose values are the JSON schema corresponding to that pair of inputs. (These schemas may have
                JsonRef references to definitions that are defined in the second returned element.)
            - The second element is a JSON schema containing all definitions referenced in the first returned
                element, along with the optional title and description keys.

    """
    schema_generator_instance = schema_generator(
        by_alias=by_alias, ref_template=ref_template, union_format=union_format
    )

    inputs_ = []
    for key, mode, adapter in inputs:
        # This is the same pattern we follow for model json schemas - we attempt a core schema rebuild if we detect a mock
        if isinstance(adapter.core_schema, _mock_val_ser.MockCoreSchema):
            adapter.core_schema.rebuild()
            assert not isinstance(adapter.core_schema, _mock_val_ser.MockCoreSchema), (
                'this is a bug! please report it'
            )
        inputs_.append((key, mode, adapter.core_schema))

    json_schemas_map, definitions = schema_generator_instance.generate_definitions(inputs_)

    json_schema: dict[str, Any] = {}
    if definitions:
        json_schema['$defs'] = definitions
    if title:
        json_schema['title'] = title
    if description:
        json_schema['description'] = description

    return json_schemas_map, json_schema