Google Professional Cloud Database Engineer Exam Practice Questions (P. 3)
Question #11
Your organization operates in a highly regulated industry. Separation of concerns (SoC) and the security principle of least privilege (PoLP) are critical. The operations team consists of:
Person A is a database administrator.
Person B is an analyst who generates metric reports.
Application C is responsible for automatic backups.
You need to assign roles to team members for Cloud Spanner. Which roles should you assign?
- A. roles/spanner.databaseAdmin for Person A
  roles/spanner.databaseReader for Person B
  roles/spanner.backupWriter for Application C (Most Voted)
- B. roles/spanner.databaseAdmin for Person A
  roles/spanner.databaseReader for Person B
  roles/spanner.backupAdmin for Application C
- C. roles/spanner.databaseAdmin for Person A
  roles/spanner.databaseUser for Person B
  roles/spanner.databaseReader for Application C
- D. roles/spanner.databaseAdmin for Person A
  roles/spanner.databaseUser for Person B
  roles/spanner.backupWriter for Application C
Correct Answer: B

Assigning roles/spanner.databaseAdmin to Person A fits a database administrator who must manage both database structure and security. Person B only needs to read data for metric reports, so roles/spanner.databaseReader grants that access without any ability to alter data. For Application C, roles/spanner.backupWriter permits creating backups only, while roles/spanner.backupAdmin also allows updating and deleting them; the listed answer (B) favors backupAdmin for its full backup-lifecycle management in a heavily regulated environment, though under a strict reading of PoLP the community-voted option A, with the narrower backupWriter role, grants only what the application actually needs.
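For concreteness, here is a minimal sketch of how these bindings could be granted programmatically with the Spanner Instance Admin client, assuming the google-cloud-spanner Python library; the project, instance, and member names are placeholders, not values from the question:

```python
# Minimal sketch: grant least-privilege Spanner roles at the instance
# level via a read-modify-write of the IAM policy. All identifiers
# below are placeholders.
from google.cloud import spanner_admin_instance_v1

PROJECT = "my-project"
INSTANCE = "regulated-instance"

BINDINGS = [
    ("roles/spanner.databaseAdmin", "user:person-a@example.com"),
    ("roles/spanner.databaseReader", "user:person-b@example.com"),
    ("roles/spanner.backupAdmin",
     "serviceAccount:app-c@my-project.iam.gserviceaccount.com"),
]

client = spanner_admin_instance_v1.InstanceAdminClient()
resource = client.instance_path(PROJECT, INSTANCE)

# Fetch the current policy, append the new bindings, write it back.
policy = client.get_iam_policy(request={"resource": resource})
for role, member in BINDINGS:
    policy.bindings.add(role=role, members=[member])
client.set_iam_policy(request={"resource": resource, "policy": policy})
```

Because the fetched policy carries an etag, writing it back unchanged except for the appended bindings keeps the update safe against concurrent edits.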
Question #12
You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low. What should you do?
- A. Manually scale down the number of nodes after the peak period has passed.
- B. Use interleaving to co-locate parent and child rows.
- C. Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
- D. Use granular instance sizing in Cloud Spanner and Autoscaler. (Most Voted)
Correct Answer: C

Because query response times already meet the SLA, the cost problem lies in over-provisioned capacity, not query execution. The Autoscaler tool, together with granular instance sizing (processing units instead of whole nodes), dynamically matches Spanner compute to actual demand without manual intervention, which is exactly what a gaming workload with variable, unpredictable traffic needs. The query optimizer (the listed answer, C) helps individual SQL statements run efficiently, but it does not reduce provisioned capacity; this is why the community vote favors option D.
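To illustrate granular instance sizing, the sketch below resizes an instance by processing units with the google-cloud-spanner Python client, assuming a recent library version that exposes processing_units (the open-source Autoscaler automates this same resize from Cloud Monitoring metrics). Identifiers and the target size are placeholders:

```python
# Sketch of granular instance sizing: resize a Spanner instance by
# processing units (steps of 100 below 1,000 give fractions of a node).
# Assumes google-cloud-spanner new enough to support processing_units.
from google.cloud import spanner

client = spanner.Client(project="my-project")   # placeholder project
instance = client.instance("game-backend")      # placeholder instance

instance.reload()                               # fetch current config
print("current processing units:", instance.processing_units)

instance.processing_units = 300                 # scale to 0.3 node
operation = instance.update()                   # long-running operation
operation.result(timeout=300)                   # wait for completion
```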
Question #13
You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?
- A. Maintain a target of 23% CPU utilization by locating:
  cluster-a in zone us-central1-a
  cluster-b in zone europe-west1-d
  cluster-c in zone asia-east1-b
- B. Maintain a target of 23% CPU utilization by locating:
  cluster-a in zone us-central1-a
  cluster-b in zone us-central1-b
  cluster-c in zone us-east1-a
- C. Maintain a target of 35% CPU utilization by locating:
  cluster-a in zone us-central1-a
  cluster-b in zone australia-southeast1-a
  cluster-c in zone europe-west1-d
  cluster-d in zone asia-east1-b
- D. Maintain a target of 35% CPU utilization by locating:
  cluster-a in zone us-central1-a
  cluster-b in zone us-central2-a
  cluster-c in zone asia-northeast1-b
  cluster-d in zone asia-east1-b (Most Voted)
Correct Answer: D

Option D distributes clusters optimally across the two target markets: two clusters in the US (us-central1, us-central2) and two in APAC (asia-northeast1, asia-east1), so each region has an in-region replica for failover and load management. The 35% CPU utilization target follows Google's guidance for multi-cluster instances: steady-state utilization must leave enough headroom for a surviving cluster to absorb a failed peer's traffic. This configuration keeps the system resilient, available, and performant across both markets, in line with Google-recommended practices.
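A sketch of what option D's topology could look like when created with the google-cloud-bigtable Python client is shown below; the instance ID, node counts, and storage type are illustrative assumptions, and a multi-cluster-routing app profile (not shown) would complete the failover setup:

```python
# Sketch: create a replicated Bigtable instance with two US and two
# APAC clusters, matching option D. IDs and node counts are placeholders.
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance(
    "product-prod", instance_type=enums.Instance.Type.PRODUCTION)

clusters = [
    instance.cluster("cluster-a", location_id="us-central1-a",
                     serve_nodes=3,
                     default_storage_type=enums.StorageType.SSD),
    instance.cluster("cluster-b", location_id="us-central2-a",
                     serve_nodes=3,
                     default_storage_type=enums.StorageType.SSD),
    instance.cluster("cluster-c", location_id="asia-northeast1-b",
                     serve_nodes=3,
                     default_storage_type=enums.StorageType.SSD),
    instance.cluster("cluster-d", location_id="asia-east1-b",
                     serve_nodes=3,
                     default_storage_type=enums.StorageType.SSD),
]
operation = instance.create(clusters=clusters)
operation.result(timeout=600)  # wait for instance creation
```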
Question #14
Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second. What should you do?
- A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis. (Most Voted)
- B. Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
- C. Use Memorystore to handle your low-latency requirements and for real-time analytics.
- D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.
Correct Answer: B

Bigtable is the natural fit here: it delivers low-latency reads and writes and scales horizontally to millions of requests per second, comfortably handling 8 TB of clickstream data. Combined with Dataproc and the Apache HBase client libraries (Bigtable exposes an HBase-compatible API), it supports the planned big data analysis of customer traffic patterns. Cloud SQL with read replicas (the listed answer, B) is not designed for this write rate or scale, which is why the community vote favors option A.
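To illustrate the ingestion side, here is a minimal sketch of writing a clickstream event to Bigtable with the google-cloud-bigtable Python client; the instance, table, and column-family names are invented for the example, and the row-key layout is one common pattern rather than the only correct one:

```python
# Sketch: write a clickstream event to Bigtable with a row key that
# spreads load (user id first, then a reversed timestamp so a user's
# newest events sort first). Names are placeholders.
import time
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("clickstream-prod").table("events")

def write_click(user_id: str, page: str) -> None:
    # Reversed-timestamp suffix keeps per-user scans newest-first and
    # avoids a monotonically increasing (hotspotting) key.
    reversed_ts = 2**63 - time.time_ns()
    row_key = f"{user_id}#{reversed_ts:020d}".encode("utf-8")
    row = table.direct_row(row_key)
    row.set_cell("clicks", b"page", page.encode("utf-8"))
    row.commit()

write_click("user-42", "/products/augmented-goggles")
```

Leading with the user id distributes concurrent writers across the key space, while the per-user timestamp ordering keeps recent-activity scans cheap.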
Question #15
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation. What should you do? (Choose two.)
- A. Use an auto-incrementing value as the primary key.
- B. Normalize the data model.
- C. Promote low-cardinality attributes in multi-attribute primary keys.
- D. Promote high-cardinality attributes in multi-attribute primary keys. (Most Voted)
- E. Use a bit-reversed sequential value as the primary key. (Most Voted)
Correct Answer: AD

Promoting high-cardinality attributes to the front of multi-attribute primary keys (D) spreads load evenly across key ranges, reducing the risk of hotspots. Using a bit-reversed sequential value as the primary key (E) likewise scatters newly inserted rows across servers instead of appending them all to the end of the key space. Note that auto-incrementing primary keys (A), which the listed answer includes, are exactly what Google's Cloud Spanner schema guidance warns against, since monotonically increasing keys funnel every insert into the same split; the community vote for D and E matches the published guidance.
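The bit-reversal idea behind option E is easy to demonstrate: reversing the bits of a 64-bit sequential counter turns adjacent values into keys that land far apart. Here is a minimal Python sketch (Cloud Spanner also offers bit-reversed sequences natively in DDL, so you would not normally hand-roll this):

```python
# Sketch: bit-reverse a 64-bit sequential id so consecutive ids are
# widely separated in the key space, avoiding insert hotspots.
def bit_reverse_64(n: int) -> int:
    """Reverse the bits of a 64-bit unsigned integer."""
    result = 0
    for _ in range(64):
        result = (result << 1) | (n & 1)
        n >>= 1
    return result

# Consecutive ids map to distant keys:
for seq in (1, 2, 3):
    print(seq, "->", bit_reverse_64(seq))
# 1 -> 9223372036854775808
# 2 -> 4611686018427387904
# 3 -> 13835058055282163712
```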