# bq

> A Python-based tool for BigQuery, Google Cloud's fully managed, serverless enterprise data warehouse.
> More information: <https://cloud.google.com/bigquery/docs/reference/bq-cli-reference>.

- Run a query against a BigQuery table using standard SQL (add the `--dry_run` flag to estimate the number of bytes read by the query):

`bq query --nouse_legacy_sql 'SELECT COUNT(*) FROM {{dataset_name}}.{{table_name}}'`
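
For instance, the dry-run variant mentioned above, which estimates bytes read without executing the query:

`bq query --nouse_legacy_sql --dry_run 'SELECT COUNT(*) FROM {{dataset_name}}.{{table_name}}'`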

- Run a parameterized query:

`bq query --use_legacy_sql=false --parameter='ts_value:TIMESTAMP:2016-12-07 08:00:00' 'SELECT TIMESTAMP_ADD(@ts_value, INTERVAL 1 HOUR)'`
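
`--parameter` can be repeated to bind several values; a sketch with an INT64 parameter (the name `min_count` and its value are illustrative):

`bq query --use_legacy_sql=false --parameter='min_count:INT64:250' 'SELECT @min_count'`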

- Create a new dataset or table in the US location:

`bq mk --location=US {{dataset_name}}.{{table_name}}`
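
To create only the dataset, pass a bare dataset name without a table:

`bq mk --location=US {{dataset_name}}`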

- List datasets in a project, filtered by a label and capped at a maximum number of results:

`bq ls --filter labels.{{key}}:{{value}} --max_results {{integer}} --format=prettyjson --project_id {{project_id}}`
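
Dropping the filter and limit lists every dataset in the project:

`bq ls --format=prettyjson --project_id {{project_id}}`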

- Batch load data from a specific file in formats such as CSV, JSON, Parquet, and Avro into a table:

`bq load --location {{location}} --source_format {{CSV|NEWLINE_DELIMITED_JSON|PARQUET|AVRO}} {{dataset}}.{{table}} {{path_to_source}}`
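
If the destination table has no schema yet, the `--autodetect` flag should let BigQuery infer one from CSV or JSON input; a minimal sketch:

`bq load --autodetect --source_format CSV {{dataset}}.{{table}} {{path_to_source}}`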

- Copy one table to another:

`bq cp {{dataset}}.{{old_table}} {{dataset}}.{{new_table}}`
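
The same form also copies a table into a different dataset:

`bq cp {{source_dataset}}.{{table_name}} {{destination_dataset}}.{{table_name}}`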

- Display help:

`bq help`
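
Help for a single subcommand should also be available by naming it, e.g. `bq help query`:

`bq help {{subcommand}}`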