Google Professional Cloud DevOps Engineer Exam Practice Questions (P. 3)
Question #11
You support the backend of a mobile phone game that runs on a Google Kubernetes Engine (GKE) cluster. The application is serving HTTP requests from users.
You need to implement a solution that will reduce the network cost. What should you do?
- A. Configure the VPC as a Shared VPC Host project.
- B. Configure your network services on the Standard Tier. (Most Voted)
- C. Configure your Kubernetes cluster as a Private Cluster.
- D. Configure a Google Cloud HTTP Load Balancer as Ingress.
Correct Answer:
B

Serving HTTP traffic to end users from GKE incurs internet egress charges, and the Standard Network Service Tier is priced lower than the default Premium Tier, which directly reduces network cost. A Shared VPC, a private cluster, or an HTTP load balancer changes the network topology but not the egress pricing.

Reference:
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod
Question #12
You encountered a major service outage that affected all users of the service for multiple hours. After several hours of incident management, the service returned to normal, and user access was restored. You need to provide an incident summary to relevant stakeholders following the Site Reliability Engineering recommended practices. What should you do first?
- A. Call individual stakeholders to explain what happened.
- B. Develop a post-mortem to be distributed to stakeholders. (Most Voted)
- C. Send the Incident State Document to all the stakeholders.
- D. Require the engineer responsible to write an apology email to all stakeholders.
Correct Answer:
B

The first step in addressing a major service outage, according to Site Reliability Engineering practices, should be developing a comprehensive post-mortem report. This document not only details the incident’s impact and root causes but also outlines preventive measures to avoid future occurrences. Direct calls might initially sound proactive, but they lack the structured and detailed analysis provided by a written post-mortem that ensures all stakeholders are thoroughly informed and aligned on the incident's details and resolution strategies.
Question #13
You are performing a semi-annual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month over the next six months. Your service is fully containerized and runs on Google Cloud Platform (GCP), using a Google Kubernetes Engine (GKE) Standard regional cluster on three zones with cluster autoscaler enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure, while avoiding unnecessary costs. How should you prepare to handle the predicted growth?
- A. Verify the maximum node pool size, enable a horizontal pod autoscaler, and then perform a load test to verify your expected resource needs. (Most Voted)
- B. Because you are deployed on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically, regardless of growth rate.
- C. Because you are at only 30% utilization, you have significant headroom and you won't need to add any additional capacity for this rate of growth.
- D. Proactively add 60% more node capacity to account for six months of 10% growth rate, and then perform a load test to make sure you have enough capacity.
Correct Answer:
A

The correct approach combines verifying static limits with GKE's dynamic scaling tools. First, confirm that the maximum node pool size leaves room for the projected growth, so the cluster autoscaler is not capped below future demand. Second, enable and configure a Horizontal Pod Autoscaler (HPA), which adjusts the number of pods in a deployment based on observed usage metrics, so compute resources track the actual load. Finally, run a load test to validate the system's capacity and resilience under the expected conditions, ensuring users remain unaffected during scaling operations and potential zone failures.
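A quick sanity check of the numbers in this scenario shows why relying on headroom alone is risky. The growth compounds (it is not a flat 60%), and losing one of three zones removes a third of the deployed capacity:

```python
# Capacity projection for 10% month-over-month growth over six months,
# starting from 30% CPU utilization, on a three-zone regional cluster.
growth_factor = 1.10 ** 6                      # compounded, not 6 * 10%
projected_utilization = 0.30 * growth_factor   # utilization after 6 months
after_zone_loss = projected_utilization / (2 / 3)  # same load on 2 of 3 zones

print(f"compounded growth factor: {growth_factor:.2f}")        # ~1.77
print(f"projected utilization: {projected_utilization:.0%}")   # ~53%
print(f"utilization if one zone fails: {after_zone_loss:.0%}") # ~80%
```

Six months of 10% growth is roughly a 77% load increase, and a zone failure at that point pushes the surviving zones to about 80% utilization, which is why verifying autoscaler limits and load testing (option A) matters despite the apparent headroom.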
Question #14
Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?
- A. Use Cloud Build to trigger a Spinnaker pipeline.
- B. Use Cloud Pub/Sub to trigger a Spinnaker pipeline. (Most Voted)
- C. Use a custom builder in Cloud Build to trigger a Jenkins pipeline.
- D. Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).
Correct Answer:
B

For automating deployments when your application images are updated in Google Container Registry, Cloud Pub/Sub is the ideal choice. It can efficiently trigger a deployment tool like Spinnaker by reacting to notifications from GCR about updated images. This integration minimizes development effort while leveraging native Google Cloud tools for a streamlined process. The use of Cloud Pub/Sub in this context supports real-world, platform-native continuous deployment practices optimized for Google Cloud environments.
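As a sketch of how such a trigger can filter events: GCR publishes JSON notifications to a Pub/Sub topic with fields such as "action" ("INSERT" on push, "DELETE" on removal), "digest", and "tag". The helper name and image path below are illustrative, not part of any API:

```python
import json

def should_trigger_deploy(message_data: bytes) -> bool:
    """Decide whether a GCR Pub/Sub notification warrants a deployment.

    Expects the documented GCR notification shape: a JSON object with
    "action", "digest", and (for tagged pushes) "tag".
    """
    event = json.loads(message_data)
    # Deploy only on new image pushes that carry a tag; ignore deletions
    # and untagged digest-only updates.
    return event.get("action") == "INSERT" and bool(event.get("tag"))

# Example notification payload (image path is illustrative):
sample = json.dumps({
    "action": "INSERT",
    "digest": "gcr.io/my-project/app@sha256:abc123",
    "tag": "gcr.io/my-project/app:v1.2.0",
}).encode()

print(should_trigger_deploy(sample))  # True
```

In practice Spinnaker's Pub/Sub trigger performs this subscription and filtering for you; the point is that the pipeline reacts to registry events rather than polling for new images.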
Question #15
Your product is currently deployed in three Google Cloud Platform (GCP) zones with your users divided between the zones. You can fail over from one zone to another, but it causes a 10-minute service disruption for the affected users. You typically experience a database failure once per quarter and can detect it within five minutes. You are cataloging the reliability risks of a new real-time chat feature for your product. You catalog the following information for each risk:
* Mean Time to Detect (MTTD) in minutes
* Mean Time to Repair (MTTR) in minutes
* Mean Time Between Failure (MTBF) in days
* User Impact Percentage
The chat feature requires a new database system that takes twice as long to successfully fail over between zones. You want to account for the risk of the new database failing in one zone. What would be the values for the risk of database failover with the new system?
- A. MTTD: 5, MTTR: 10, MTBF: 90, Impact: 33%
- B. MTTD: 5, MTTR: 20, MTBF: 90, Impact: 33% (Most Voted)
- C. MTTD: 5, MTTR: 10, MTBF: 90, Impact: 50%
- D. MTTD: 5, MTTR: 20, MTBF: 90, Impact: 50%
Correct Answer:
B

MTTD remains 5 minutes, since detection is unchanged by the new database. MTTR doubles from 10 to 20 minutes because the new system takes twice as long to fail over between zones. MTBF is 90 days, matching the historical rate of one database failure per quarter. With users divided evenly across three zones, a single-zone failure affects about one third of users, so the impact is 33%, not 50%. Option B is the only answer consistent with all four values.
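The four risk values follow directly from the figures in the question:

```python
# Risk values for the new database, derived from the scenario:
mttd_minutes = 5          # detection time is unchanged
mttr_minutes = 10 * 2     # failover takes twice as long: 20 minutes
mtbf_days = 90            # one failure per quarter ~= 90 days
user_impact = 1 / 3       # one of three evenly loaded zones fails

print(mttd_minutes, mttr_minutes, mtbf_days,
      f"{user_impact:.0%}")  # 5 20 90 33%
```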