Version: 0.14.13

How to Create Renderers for Custom Expectations

danger

This guide only applies to Great Expectations versions 0.13 and above, which make use of the new modular Expectation architecture. If you have implemented a custom Expectation but have not yet migrated it using the new modular patterns, you can still use this guide to implement custom renderers for your Expectation.

This guide will help you implement renderers for your custom Expectations, allowing you to control how your custom Expectations are displayed in Data Docs. Implementing renderers as part of your custom Expectations is not strictly required - if not provided, Great Expectations will render your Expectation using a basic default renderer:

Expectation rendered using default renderer

Prerequisites: This how-to guide assumes you have a working Great Expectations deployment and an existing custom Expectation to add renderers to.

See also this complete custom expectation with renderer example.

Steps

  1. First, decide which renderer types you need to implement.

    Use the following annotated Validation Result as a guide (most major renderer types represented):

    Annotated Validation Result Example

    At minimum, you should implement a renderer with type renderer.prescriptive, which is used to render the human-readable form of your expectation when displaying Expectation Suites and Validation Results. In many cases, this will be the only custom renderer you will have to implement - for the remaining renderer types used on the Validation Results page, Great Expectations provides default renderers that can handle many types of Expectations.

    Renderer Types Overview (the sketch after this list shows the Validation Result fields each diagnostic renderer reads):

    • renderer.prescriptive: renders human-readable form of Expectation from ExpectationConfiguration
    • renderer.diagnostic.unexpected_statement: renders summary statistics of unexpected values if ExpectationValidationResult includes unexpected_count and element_count
    • renderer.diagnostic.unexpected_table: renders a sample of unexpected values (and in certain cases, counts) in table format, if ExpectationValidationResult includes partial_unexpected_list or partial_unexpected_counts
    • renderer.diagnostic.observed_value: renders the observed value if included in ExpectationValidationResult
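
    To see which Validation Result fields drive each diagnostic renderer, here is an abbreviated, illustrative result payload. The field names follow the standard result_format keys; the values are hypothetical:

    # Illustrative values only; only the fields relevant to the diagnostic renderers are shown.
    example_validation_result = {
        "success": False,
        "result": {
            "element_count": 100,        # read by renderer.diagnostic.unexpected_statement
            "unexpected_count": 7,       # read by renderer.diagnostic.unexpected_statement
            "unexpected_percent": 7.0,
            "partial_unexpected_list": ["N/A", "", "unknown"],              # read by renderer.diagnostic.unexpected_table
            "partial_unexpected_counts": [{"value": "N/A", "count": 5}],    # read by renderer.diagnostic.unexpected_table
            "observed_value": 7.0,       # read by renderer.diagnostic.observed_value
        },
    }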
  2. Next, implement a renderer with type renderer.prescriptive.

    Declare a class method in your custom Expectation class and decorate it with @renderer(renderer_type="renderer.prescriptive"). The method name is arbitrary, but the convention is to snake_case the renderer type and move the "renderer" prefix to the end (e.g. _prescriptive_renderer). Adding the @render_evaluation_parameter_string decorator allows Expectations that use Evaluation Parameters to render the values of those Evaluation Parameters along with the rest of the output.

    The method should have the following signature, which is shared across renderers:

    @classmethod
    @renderer(renderer_type="renderer.prescriptive")
    @render_evaluation_parameter_string
    def _prescriptive_renderer(
        cls,
        configuration: ExpectationConfiguration = None,
        result: ExpectationValidationResult = None,
        language: str = None,
        runtime_configuration: dict = None,
        **kwargs,
    ) -> List[
        Union[
            dict,
            str,
            RenderedStringTemplateContent,
            RenderedTableContent,
            RenderedBulletListContent,
            RenderedGraphContent,
            Any,
        ]
    ]:
        assert configuration or result, "Must provide renderers either a configuration or result."
        ...
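
    The names used in this signature and in the example below must be importable in your Expectation's module. In the 0.13/0.14 line of releases they typically come from the following locations (a sketch; exact module paths can vary slightly between versions):

    from string import Template
    from typing import Any, List, Union

    from great_expectations.core.expectation_configuration import ExpectationConfiguration
    from great_expectations.core.expectation_validation_result import ExpectationValidationResult
    from great_expectations.expectations.expectation import (
        ColumnMapExpectation,
        render_evaluation_parameter_string,
    )
    from great_expectations.render.renderer.renderer import renderer
    from great_expectations.render.types import (
        CollapseContent,
        RenderedBulletListContent,
        RenderedGraphContent,
        RenderedStringTemplateContent,
        RenderedTableContent,
    )
    from great_expectations.render.util import (
        num_to_str,
        parse_row_condition_string_pandas_engine,
        substitute_none_for_missing,
    )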

    In general, renderers receive as input either an ExpectationConfiguration (for prescriptive renderers) or an ExpectationValidationResult (for diagnostic renderers) and return a list of rendered elements. The example below renders the Expectation as a simple string; the same method can instead return richer content types such as RenderedStringTemplateContent, RenderedTableContent, or RenderedGraphContent.

Input:

example_expectation_config = ExpectationConfiguration(**{
    "expectation_type": "expect_column_value_lengths_to_be_between",
    "kwargs": {
        "column": "SSL",
        "min_value": 1,
        "max_value": 11,
        "result_format": "COMPLETE",
    },
})

Rendered Output:

Simple String Example

Implementation:

class ExpectColumnValueLengthsToBeBetween(ColumnMapExpectation):
    ...

    @classmethod
    @renderer(renderer_type="renderer.prescriptive")
    @render_evaluation_parameter_string
    def _prescriptive_renderer(
        cls,
        configuration: ExpectationConfiguration = None,
        result: ExpectationValidationResult = None,
        language: str = None,
        runtime_configuration: dict = None,
        **kwargs,
    ) -> List[
        Union[
            dict,
            str,
            RenderedStringTemplateContent,
            RenderedTableContent,
            RenderedBulletListContent,
            RenderedGraphContent,
            Any,
        ]
    ]:
        runtime_configuration = runtime_configuration or {}
        include_column_name = runtime_configuration.get("include_column_name", True)
        include_column_name = (
            include_column_name if include_column_name is not None else True
        )
        styling = runtime_configuration.get("styling")
        # get params dict with all expected kwargs
        params = substitute_none_for_missing(
            configuration.kwargs,
            [
                "column",
                "min_value",
                "max_value",
                "mostly",
                "row_condition",
                "condition_parser",
                "strict_min",
                "strict_max",
            ],
        )

        # build string template
        if (params["min_value"] is None) and (params["max_value"] is None):
            template_str = "values may have any length."
        else:
            at_least_str = (
                "greater than"
                if params.get("strict_min") is True
                else "greater than or equal to"
            )
            at_most_str = (
                "less than" if params.get("strict_max") is True else "less than or equal to"
            )

            if params["mostly"] is not None:
                params["mostly_pct"] = num_to_str(
                    params["mostly"] * 100, precision=15, no_scientific=True
                )

                if params["min_value"] is not None and params["max_value"] is not None:
                    template_str = f"values must be {at_least_str} $min_value and {at_most_str} $max_value characters long, at least $mostly_pct % of the time."
                elif params["min_value"] is None:
                    template_str = f"values must be {at_most_str} $max_value characters long, at least $mostly_pct % of the time."
                elif params["max_value"] is None:
                    template_str = f"values must be {at_least_str} $min_value characters long, at least $mostly_pct % of the time."
            else:
                if params["min_value"] is not None and params["max_value"] is not None:
                    template_str = f"values must always be {at_least_str} $min_value and {at_most_str} $max_value characters long."
                elif params["min_value"] is None:
                    template_str = f"values must always be {at_most_str} $max_value characters long."
                elif params["max_value"] is None:
                    template_str = f"values must always be {at_least_str} $min_value characters long."

        if include_column_name:
            template_str = "$column " + template_str

        if params["row_condition"] is not None:
            (
                conditional_template_str,
                conditional_params,
            ) = parse_row_condition_string_pandas_engine(params["row_condition"])
            template_str = conditional_template_str + ", then " + template_str
            params.update(conditional_params)

        # return simple string
        return [Template(template_str).substitute(params)]
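
You can sanity-check a prescriptive renderer without building Data Docs by calling it directly with an ExpectationConfiguration. A quick sketch, assuming the class, imports, and example_expectation_config above are in scope (the exact wording of the output depends on your template logic):

# Call the renderer directly with the example configuration defined above.
rendered = ExpectColumnValueLengthsToBeBetween._prescriptive_renderer(
    configuration=example_expectation_config
)
print(rendered[0])
# Expected to read roughly:
# "SSL values must always be greater than or equal to 1 and less than or equal to 11 characters long."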
  3. If necessary, implement additional renderer types that override the Great Expectations defaults.

    The default implementation of renderer.diagnostic.unexpected_statement is provided below for reference:

    note

    These renderers are not required to produce output for every Expectation.

@classmethod
@renderer(renderer_type="renderer.diagnostic.unexpected_statement")
def _diagnostic_unexpected_statement_renderer(
    cls,
    configuration=None,
    result=None,
    language=None,
    runtime_configuration=None,
    **kwargs,
):
    assert result, "Must provide a result object."
    success = result.success
    result_dict = result.result

    if result.exception_info["raised_exception"]:
        exception_message_template_str = (
            "\n\n$expectation_type raised an exception:\n$exception_message"
        )

        exception_message = RenderedStringTemplateContent(
            **{
                "content_block_type": "string_template",
                "string_template": {
                    "template": exception_message_template_str,
                    "params": {
                        "expectation_type": result.expectation_config.expectation_type,
                        "exception_message": result.exception_info[
                            "exception_message"
                        ],
                    },
                    "tag": "strong",
                    "styling": {
                        "classes": ["text-danger"],
                        "params": {
                            "exception_message": {"tag": "code"},
                            "expectation_type": {
                                "classes": ["badge", "badge-danger", "mb-2"]
                            },
                        },
                    },
                },
            }
        )

        exception_traceback_collapse = CollapseContent(
            **{
                "collapse_toggle_link": "Show exception traceback...",
                "collapse": [
                    RenderedStringTemplateContent(
                        **{
                            "content_block_type": "string_template",
                            "string_template": {
                                "template": result.exception_info[
                                    "exception_traceback"
                                ],
                                "tag": "code",
                            },
                        }
                    )
                ],
            }
        )

        return [exception_message, exception_traceback_collapse]

    if success or not result_dict.get("unexpected_count"):
        return []
    else:
        unexpected_count = num_to_str(
            result_dict["unexpected_count"], use_locale=True, precision=20
        )
        unexpected_percent = (
            num_to_str(result_dict["unexpected_percent"], precision=4) + "%"
        )
        element_count = num_to_str(
            result_dict["element_count"], use_locale=True, precision=20
        )

        template_str = (
            "\n\n$unexpected_count unexpected values found. "
            "$unexpected_percent of $element_count total rows."
        )

        return [
            RenderedStringTemplateContent(
                **{
                    "content_block_type": "string_template",
                    "string_template": {
                        "template": template_str,
                        "params": {
                            "unexpected_count": unexpected_count,
                            "unexpected_percent": unexpected_percent,
                            "element_count": element_count,
                        },
                        "tag": "strong",
                        "styling": {"classes": ["text-danger"]},
                    },
                }
            )
        ]
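
If you want to override one of the other diagnostic renderers, a minimal renderer.diagnostic.observed_value override on your custom Expectation might look like the following. This is a simplified sketch; the built-in default handles percentages and other edge cases more carefully:

@classmethod
@renderer(renderer_type="renderer.diagnostic.observed_value")
def _diagnostic_observed_value_renderer(
    cls,
    configuration=None,
    result=None,
    language=None,
    runtime_configuration=None,
    **kwargs,
):
    # Read the observed value from the Validation Result, if present.
    result_dict = result.result if result else {}
    observed_value = result_dict.get("observed_value")
    if observed_value is None:
        return "--"
    # Format numbers with locale-aware separators; fall back to str() otherwise.
    if isinstance(observed_value, (int, float)) and not isinstance(observed_value, bool):
        return num_to_str(observed_value, precision=10, use_locale=True)
    return str(observed_value)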
  4. Lastly, test that your renderers are providing the desired output by building your Data Docs site.

    Use the following CLI command: great_expectations docs build.
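
    You can also rebuild and open the site from Python instead of the CLI; a sketch assuming a standard project layout with a great_expectations.yml:

    from great_expectations.data_context import DataContext

    context = DataContext()    # loads great_expectations.yml from the current project
    context.build_data_docs()  # rebuilds all configured Data Docs sites
    context.open_data_docs()   # opens the rebuilt site in a browser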