Class: ElasticGraph::GraphQL::QueryExecutor
- Inherits: Object
  - Object
  - ElasticGraph::GraphQL::QueryExecutor
- Defined in:
- lib/elastic_graph/graphql/query_executor.rb
Overview
Responsible for executing queries.
Instance Attribute Summary collapse
-
#schema ⇒ Object
readonly
Returns the value of attribute schema.
Instance Method Summary collapse
-
#execute(query_string, client: Client::ANONYMOUS, variables: {}, timeout_in_ms: nil, operation_name: nil, context: {}, start_time_in_ms: @monotonic_clock.now_in_ms) ⇒ Object
Executes the given `query_string` using the provided `variables`.
-
#initialize(schema:, monotonic_clock:, logger:, slow_query_threshold_ms:) ⇒ QueryExecutor
constructor
A new instance of QueryExecutor.
Constructor Details
#initialize(schema:, monotonic_clock:, logger:, slow_query_threshold_ms:) ⇒ QueryExecutor
Returns a new instance of QueryExecutor.
```ruby
# File 'lib/elastic_graph/graphql/query_executor.rb', line 20

def initialize(schema:, monotonic_clock:, logger:, slow_query_threshold_ms:)
  @schema = schema
  @monotonic_clock = monotonic_clock
  @logger = logger
  @slow_query_threshold_ms = slow_query_threshold_ms
end
```
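The collaborators passed to the constructor are normally wired up by the ElasticGraph runtime. As an illustration only, with hypothetical stand-ins for the monotonic clock and logger (the class and variable names below are assumptions, not part of the library's API), instantiation might look like:

```ruby
require "logger"

# Hypothetical stand-in for the monotonic clock collaborator; it only needs
# to expose `now_in_ms`, which is what `initialize` and `execute` call on it.
class MonotonicClock
  def now_in_ms
    # A monotonic clock is used (rather than wall-clock time) so that
    # duration math is immune to system clock adjustments.
    Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
  end
end

clock = MonotonicClock.new
logger = Logger.new($stdout)

# With a schema object in hand, the executor could then be built as:
# executor = ElasticGraph::GraphQL::QueryExecutor.new(
#   schema: schema,
#   monotonic_clock: clock,
#   logger: logger,
#   slow_query_threshold_ms: 5_000
# )
```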
Instance Attribute Details
#schema ⇒ Object (readonly)
Returns the value of attribute schema.
```ruby
# File 'lib/elastic_graph/graphql/query_executor.rb', line 18

def schema
  @schema
end
```
Instance Method Details
#execute(query_string, client: Client::ANONYMOUS, variables: {}, timeout_in_ms: nil, operation_name: nil, context: {}, start_time_in_ms: @monotonic_clock.now_in_ms) ⇒ Object
Executes the given `query_string` using the provided `variables`.
`timeout_in_ms` can be provided to limit how long the query runs. If the timeout is exceeded, `Errors::RequestExceededDeadlineError` will be raised. Note that `timeout_in_ms` does not provide an absolute guarantee that the query will take no longer than the provided value; it is only used to halt datastore queries, so in-process computation can push the total query time past the specified timeout.
`context` is merged into the context hash passed to the resolvers.
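The deadline passed to the resolver context is derived by adding the timeout to the start time, using safe navigation so that an absent timeout yields no deadline at all. A small sketch of that arithmetic (the variable values here are illustrative):

```ruby
# Mirrors how `execute` computes the `monotonic_clock_deadline` entry it
# merges into the resolver context.
start_time_in_ms = 1_000_000
timeout_in_ms = 500

# `&.+` keeps the result `nil` when no timeout was given, and `.compact`
# in `execute` then drops the `monotonic_clock_deadline` key entirely.
deadline = timeout_in_ms&.+(start_time_in_ms)  # => 1_000_500
no_deadline = nil&.+(start_time_in_ms)         # => nil
```

Because the merged hash is passed through `.compact`, callers that omit `timeout_in_ms` get a context with no `monotonic_clock_deadline` key rather than a `nil` value.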
```ruby
# File 'lib/elastic_graph/graphql/query_executor.rb', line 36

def execute(
  query_string,
  client: Client::ANONYMOUS,
  variables: {},
  timeout_in_ms: nil,
  operation_name: nil,
  context: {},
  start_time_in_ms: @monotonic_clock.now_in_ms
)
  # Before executing the query, prune any null-valued variable fields. This means we
  # treat `foo: null` the same as if `foo` was unmentioned. With certain clients (e.g.
  # code-gen'd clients in a statically typed language), it is non-trivial to avoid
  # mentioning variable fields they aren't using. It makes it easier to evolve the
  # schema if we ignore null-valued fields rather than potentially returning an error
  # due to a null-valued field referencing an undefined schema element.
  variables = ElasticGraph::Support::HashUtil.recursively_prune_nils_from(variables)
  query_tracker = QueryDetailsTracker.empty

  query, result = build_and_execute_query(
    query_string: query_string,
    variables: variables,
    operation_name: operation_name,
    client: client,
    context: context.merge({
      monotonic_clock_deadline: timeout_in_ms&.+(start_time_in_ms),
      elastic_graph_query_tracker: query_tracker,
      elastic_graph_client: client
    }.compact)
  )

  unless result.to_h.fetch("errors", []).empty?
    @logger.error <<~EOS
      Query #{query.selected_operation_name}[1] for client #{client.description} resulted in errors[2].

      [1] #{full_description_of(query)}
      [2] #{::JSON.pretty_generate(result.to_h.fetch("errors"))}
    EOS
  end

  duration = @monotonic_clock.now_in_ms - start_time_in_ms

  # Note: I also wanted to log the sanitized query if `result` has `errors`, but `GraphQL::Query#sanitized_query`
  # returns `nil` on an invalid query, and I don't want to risk leaking PII by logging the raw query string, so
  # we don't log any form of the query in that case.
  if duration > @slow_query_threshold_ms
    @logger.warn "Query #{query.selected_operation_name} for client #{client.description} with shard routing values " \
      "#{query_tracker.shard_routing_values.sort.inspect} and search index expressions #{query_tracker.search_index_expressions.sort.inspect} took longer " \
      "(#{duration} ms) than the configured slow query threshold (#{@slow_query_threshold_ms} ms). " \
      "Sanitized query:\n\n#{query.sanitized_query_string}"
  end

  unless client == Client::ELASTICGRAPH_INTERNAL
    @logger.info({
      "message_type" => "ElasticGraphQueryExecutorQueryDuration",
      "client" => client.name,
      "query_fingerprint" => query.fingerprint,
      "query_name" => query.selected_operation_name,
      "duration_ms" => duration,
      # How long the datastore queries took according to what the datastore itself reported.
      "datastore_server_duration_ms" => query_tracker.datastore_query_server_duration_ms,
      # An estimate for how much overhead ElasticGraph added on top of how long the datastore took.
      # This is based on the duration, excluding how long the datastore calls took from the client side
      # (e.g. accounting for network latency, serialization time, etc).
      "elasticgraph_overhead_ms" => duration - query_tracker.datastore_query_client_duration_ms,
      # An estimate for the time spent on transport (network latency, JSON serialization, etc).
      "datastore_request_transport_duration_ms" => query_tracker.datastore_request_transport_duration_ms,
      # How many datastore shards were queried, in total. This is a measure of how much load the query caused on the datastore.
      "queried_shard_count" => query_tracker.queried_shard_count,
      # According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html#metric-filters-extract-json,
      # > Value nodes can be strings or numbers...If a property selector points to an array or object, the metric filter won't match the log format.
      # So, to allow flexibility to deal with CloudWatch metric filters, we coerce these values to a string here.
      "unique_shard_routing_values" => query_tracker.shard_routing_values.sort.join(", "),
      # We also include the count of shard routing values, to make it easier to search logs
      # for the case of no shard routing values.
      "unique_shard_routing_value_count" => query_tracker.shard_routing_values.count,
      "unique_search_index_expressions" => query_tracker.search_index_expressions.sort.join(", "),
      # Indicates how many requests we sent to the datastore to satisfy the GraphQL query.
      "datastore_request_count" => query_tracker.query_counts_per_datastore_request.size,
      # Indicates how many individual datastore queries there were. One datastore request
      # can contain many queries (since we use `msearch`), so these counts can be different.
      "datastore_query_count" => query_tracker.query_counts_per_datastore_request.sum,
      "over_slow_threshold" => (duration > @slow_query_threshold_ms).to_s,
      "slo_result" => slo_result_for(query, duration)
    }.merge(query_tracker.extension_data))
  end

  result
end
```