   /**
-   * [Optional] The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.
+   * [Optional] The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro. Not applicable when extracting models.
    */
   compression?: string;
   /**
-   * [Optional] The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO. The default value is CSV. Tables with nested or repeated fields cannot be exported as CSV.
+   * [Optional] The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON or AVRO for tables and ML_TF_SAVED_MODEL or ML_XGBOOST_BOOSTER for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is ML_TF_SAVED_MODEL.
    */
   destinationFormat?: string;
   /**
-   * [Optional] If destinationFormat is set to "AVRO", this flag indicates whether to enable extracting applicable column types (such as TIMESTAMP) to their corresponding AVRO logical types (timestamp-micros), instead of only using their raw types (avro-long).
+   * [Optional] If destinationFormat is set to "AVRO", this flag indicates whether to enable extracting applicable column types (such as TIMESTAMP) to their corresponding AVRO logical types (timestamp-micros), instead of only using their raw types (avro-long). Not applicable when extracting models.
    */
   useAvroLogicalTypes?: boolean;
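As a hedged illustration of how these extract options compose, here is a minimal sketch of a table export and a model export. The names not shown in this diff (the IJobConfigurationExtract type itself, sourceTable, sourceModel, destinationUris) are assumptions drawn from the same BigQuery v2 API surface:

    // Sketch only: IJobConfigurationExtract, sourceTable, sourceModel, and
    // destinationUris are assumptions; the other fields come from the diff above.
    const extractTable: IJobConfigurationExtract = {
      sourceTable: {projectId: 'my-project', datasetId: 'my_dataset', tableId: 'events'},
      destinationUris: ['gs://my-bucket/events-*.avro'],
      destinationFormat: 'AVRO',
      compression: 'SNAPPY', // DEFLATE and SNAPPY are Avro-only
      useAvroLogicalTypes: true, // TIMESTAMP -> timestamp-micros instead of avro-long
    };

    const extractModel: IJobConfigurationExtract = {
      sourceModel: {projectId: 'my-project', datasetId: 'my_dataset', modelId: 'my_model'},
      destinationUris: ['gs://my-bucket/model/'],
      destinationFormat: 'ML_TF_SAVED_MODEL', // compression and useAvroLogicalTypes do not apply to models
    };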
   /**
    * [Beta] Clustering specification for the destination table. Must be specified with time-based partitioning; data in the table will be first partitioned and subsequently clustered.
    */
   clustering?: IClustering;
+  /**
+   * Connection properties.
+   */
+  connectionProperties?: Array<any>;
   /**
    * [Optional] Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result. The default value is CREATE_IF_NEEDED. Creation, truncation and append actions occur as one atomic update upon job completion.
    */
   createDisposition?: string;
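A hedged sketch of how the fields in this hunk combine on a query configuration. The IJobConfigurationQuery name, destinationTable, and timePartitioning are assumptions from the same API surface, and the connection-property shape is illustrative since the field is typed Array<any>:

    const writeConfig: IJobConfigurationQuery = {
      query: 'SELECT customer_id, region, total FROM my_dataset.orders',
      destinationTable: {projectId: 'my-project', datasetId: 'my_dataset', tableId: 'orders_by_customer'},
      timePartitioning: {type: 'DAY'}, // clustering must be paired with time-based partitioning
      clustering: {fields: ['customer_id', 'region']},
      connectionProperties: [{key: 'session_id', value: 'abc123'}], // illustrative shape; typed Array<any>
      createDisposition: 'CREATE_IF_NEEDED', // or 'CREATE_NEVER'
    };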
   /**
    * [Output-only] Name of the primary reservation assigned to this job. Note that this could be different from reservations reported in the reservation usage field if parent reservations were used to execute this job.
    */
   reservation_id?: string;
+  /**
+   * [Output-only] [Preview] Statistics for row-level security. Present only for query and extract jobs.
+   */
+  rowLevelSecurityStatistics?: IRowLevelSecurityStatistics;
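As a sketch of reading the new field alongside the reservation name, assuming an IJobStatistics container type (that name is not in this diff):

    function reservationSummary(stats: IJobStatistics): string {
      // reservation_id may differ from the reservation usage field when a
      // parent reservation actually executed the job.
      return stats.reservation_id ?? 'on-demand (no reservation reported)';
    }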
   /**
    * [Output-only] Whether the query result was fetched from the query cache.
    */
   cacheHit?: boolean;
+  /**
+   * [Output-only] [Preview] The number of row access policies affected by a DDL statement. Present only for DROP ALL ROW ACCESS POLICIES queries.
+   */
+  ddlAffectedRowAccessPolicyCount?: string;
   /**
    * The DDL operation performed, possibly dependent on the pre-existence of the DDL target. Possible values (new values might be added in the future): "CREATE": The query created the DDL target. "SKIP": No-op. Example cases: the query is CREATE TABLE IF NOT EXISTS while the table already exists, or the query is DROP TABLE IF EXISTS while the table does not exist. "REPLACE": The query replaced the DDL target. Example case: the query is CREATE OR REPLACE TABLE, and the table already exists. "DROP": The query deleted the DDL target.
    */
   ddlOperationPerformed?: string;
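A hedged sketch of interpreting these query statistics. The IJobStatistics2 container name is an assumption; note that ddlAffectedRowAccessPolicyCount is a string, following the API convention of encoding int64 values as strings:

    function summarizeQueryStats(stats: IJobStatistics2): string {
      if (stats.cacheHit) {
        return 'result served from the query cache';
      }
      const op = stats.ddlOperationPerformed ?? 'none';
      const policies = stats.ddlAffectedRowAccessPolicyCount ?? '0';
      return `DDL operation: ${op}; row access policies affected: ${policies}`;
    }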
   /**
    * [Optional] Specifies the default datasetId and projectId to assume for any unqualified table names in the query. If not set, all table names in the query string must be qualified in the format 'datasetId.tableId'.
    */
   defaultDataset?: IDatasetReference;
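For example (a sketch; IJobConfigurationQuery and IDatasetReference are assumed from the same API surface), setting defaultDataset lets the query use unqualified table names:

    const config: IJobConfigurationQuery = {
      // 'events' resolves to my-project.my_dataset.events via defaultDataset
      query: 'SELECT COUNT(*) FROM events',
      defaultDataset: {projectId: 'my-project', datasetId: 'my_dataset'},
    };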
+type IRowAccessPolicyReference = {
+  /**
+   * [Required] The ID of the dataset containing this row access policy.
+   */
+  datasetId?: string;
+  /**
+   * [Required] The ID of the row access policy. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
+   */
+  policyId?: string;
+  /**
+   * [Required] The ID of the project containing this row access policy.
+   */
+  projectId?: string;
+  /**
+   * [Required] The ID of the table containing this row access policy.
+   */
+  tableId?: string;
+};
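A minimal sketch of a fully qualified row access policy reference using the four fields above (all IDs are placeholders):

    const policyRef: IRowAccessPolicyReference = {
      projectId: 'my-project',
      datasetId: 'my_dataset',
      tableId: 'orders',
      policyId: 'region_us_only', // letters, digits, underscores; up to 256 chars
    };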
+
+type IRowLevelSecurityStatistics = {
+  /**
+   * [Output-only] [Preview] Whether any accessed data was protected by row access policies.
+   */
+  rowLevelSecurityApplied?: boolean;
+};
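And a sketch of checking the flag on job statistics, assuming it hangs off the rowLevelSecurityStatistics field added above:

    function rowLevelSecurityApplied(stats: {
      rowLevelSecurityStatistics?: IRowLevelSecurityStatistics;
    }): boolean {
      return stats.rowLevelSecurityStatistics?.rowLevelSecurityApplied === true;
    }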
"log": "99820243d348191bc9c634f2b48ddf65096285ed\nfix: update template files for Node.js libraries (#463)\n\n\n3cbe6bcd5623139ac9834c43818424ddca5430cb\nfix(ruby): remove dead troubleshooting link from generated auth guide (#462)\n\n\na003d8655d3ebec2bbbd5fc3898e91e152265c67\ndocs: remove \"install stable\" instructions (#461)\n\nThe package hasn't been released to PyPI in some time\nf5e8c88d9870d8aa4eb43fa0b39f07e02bfbe4df\nchore(python): add license headers to config files; make small tweaks to templates (#458)\n\n\ne36822bfa0acb355502dab391b8ef9c4f30208d8\nchore(java): treat samples shared configuration dependency update as chore (#457)\n\n\n1b4cc80a7aaf164f6241937dd87f3bd1f4149e0c\nfix: do not run node 8 CI (#456)\n\n\nee4330a0e5f4b93978e8683fbda8e6d4148326b7\nchore(java_templates): mark version bumps of current library as a chore (#452)\n\nWith the samples/install-without-bom/pom.xml referencing the latest released library, we want to mark updates of this version as a chore for renovate bot.\na0d3133a5d45544a66345059eebf76933265c099\nfix(java): run mvn install with retry (#453)\n\n* fix(java): run mvn install with retry\n\n* fix invocation of command\n6a17abc7652e2fe563e1288c6e8c23fc260dda97\ndocs: document the release schedule we follow (#454)\n\n\n"