Amazon AWS Certified DevOps Engineer - Professional DOP-C02 Exam Practice Questions (P. 1)
Question #1
A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?
- A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
- B. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric. (Most Voted)
- C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
- D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.
Correct Answer:
B

To effectively gather metrics by API operation name, response code, and application version from an AWS Lambda function, the best approach involves modifying the Lambda function to log these details to a CloudWatch Logs log group. Utilizing CloudWatch Logs Insights to parse these logs allows for a dynamic, query-driven way to generate the necessary metrics. This method is advantageous as it supports flexible, real-time analysis and can cater to the evolving nature of the application where multiple versions are active. This setup helps in efficiently identifying and addressing API-specific issues across different application versions.
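As a rough illustration of the Lambda-side change described above (not part of the exam material), the sketch below emits one structured log line per request. The field names operation, responseCode, and appVersion, and the assumption that the user-agent header ends in a version string, are illustrative; a CloudWatch Logs Insights query such as stats count(*) by operation, responseCode, appVersion could then aggregate the lines per option B, or a metric filter over the same fields could publish the metric per option A.

```python
# Minimal sketch of the Lambda logging change (field names and the user-agent
# format "app-name/1.2.3" are assumptions for illustration).
import json

def lambda_handler(event, context):
    headers = event.get("headers", {})
    user_agent = headers.get("user-agent", "unknown/0.0")
    app_version = user_agent.split("/")[-1]      # version carried in the user-agent header
    operation = event.get("path", "unknown")     # ALB passes the request path to Lambda

    status_code = 200                            # in a real handler, set from the API result

    # One structured log line per request; CloudWatch Logs Insights can aggregate
    # these fields with: stats count(*) by operation, responseCode, appVersion
    print(json.dumps({
        "operation": operation,
        "responseCode": status_code,
        "appVersion": app_version,
    }))

    return {"statusCode": status_code, "body": json.dumps({"ok": True})}
```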
Question #2
A company provides an application to customers. The application has an Amazon API Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table. The data load process results in long cold-start times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests. The application receives thousands of requests throughout the day. In the middle of the day, the application experiences 10 times more requests than at any other time of the day. Near the end of the day, the application's request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.
Which solution will meet these requirements?
- A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
- B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
- C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100. (Most Voted)
- D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.
Correct Answer:
C

Option C, configuring provisioned concurrency on the Lambda function combined with AWS Application Auto Scaling, effectively manages cold start issues and adjusts to fluctuating traffic. This setup ensures a baseline number of ready instances due to provisioned concurrency, reducing latency consistently throughout the day. Application Auto Scaling's ability to adjust between a minimum and a maximum based on actual demand offers an economic benefit, maintaining performance during peak times and reducing costs when demand is lower. This approach provides an optimized solution for handling varying load levels while keeping response times swift.
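For reference, a minimal boto3 sketch of option C follows. It assumes a hypothetical function named my-api-function with a live alias (provisioned concurrency must target a version or alias), and the 0.7 utilization target is illustrative.

```python
# Sketch: provisioned concurrency scaled by Application Auto Scaling (names are hypothetical).
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "function:my-api-function:live"   # alias-qualified Lambda function

# Register the alias as a scalable target with the 1-100 range from option C.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Target-tracking policy: keep provisioned-concurrency utilization near 70%,
# so capacity rises for the midday peak and falls off toward the end of the day.
autoscaling.put_scaling_policy(
    PolicyName="provisioned-concurrency-tracking",
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)
```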
Question #3
A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache Webserver. The development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application. After completion, the team will create additional deployment groups for staging and production.
The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group.
How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?
- A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
- B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file. (Most Voted)
- C. Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
- D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.
Correct Answer:
B

Option B uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME, which CodeDeploy exposes to lifecycle hook scripts, so a single script run in the BeforeInstall lifecycle hook can map the deployment group to the appropriate log level before the application files are installed. This minimizes management overhead: one application revision and one script serve every deployment group, with no need for multiple script versions, custom variables, or calls to external services such as the EC2 metadata service and API.
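A minimal sketch of such a hook script is shown below. The log-level mapping, the script path, and the Apache configuration file location are assumptions; DEPLOYMENT_GROUP_NAME is the variable CodeDeploy actually exports to hook scripts.

```python
#!/usr/bin/env python3
# Hypothetical BeforeInstall hook script (e.g. scripts/set_log_level.py in the revision,
# referenced from appspec.yml). CodeDeploy exports DEPLOYMENT_GROUP_NAME to hook scripts,
# so one script can choose a log level per deployment group.
import os

LOG_LEVELS = {              # illustrative mapping of deployment group to Apache LogLevel
    "developer": "debug",
    "staging": "info",
    "production": "warn",
}

group = os.environ.get("DEPLOYMENT_GROUP_NAME", "developer")
level = LOG_LEVELS.get(group, "warn")

# Write the Apache LogLevel directive; the target path is an assumption for this sketch.
with open("/etc/httpd/conf.d/loglevel.conf", "w") as conf:
    conf.write(f"LogLevel {level}\n")

print(f"Set Apache LogLevel to {level} for deployment group {group}")
```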
Question #4
A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency. This requirement includes EBS volumes that do not require backups. The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?
- A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
- B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. (Most Voted)
- C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
- D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
Correct Answer:
B

Option B is optimal because it directly targets the requirement to ensure each EBS volume has the Backup_Frequency tag. By leveraging AWS Config’s managed rule for EC2::Volume resources coupled with a remediation action, this setup guarantees that any untagged volumes are automatically tagged with 'weekly', ensuring compliance with backup policies. This approach is both efficient and precise, avoiding unnecessary complexities that might arise from broader rules or different services that don’t focus on EBS volumes exclusively.
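A boto3 sketch of option B follows. The rule name, the custom Automation runbook name, and its parameter names are hypothetical; REQUIRED_TAGS is the managed rule identifier and AWS::EC2::Volume is the resource type being scoped.

```python
# Sketch: AWS Config managed rule plus automatic remediation (runbook name and
# parameter names are hypothetical; the rule identifier and resource type are real).
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-backup-frequency-tag",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": '{"tag1Key": "Backup_Frequency"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)

config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "ebs-backup-frequency-tag",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "Apply-Backup-Frequency-Tag",   # custom Automation runbook
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "VolumeId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            "TagValue": {"StaticValue": {"Values": ["weekly"]}},
        },
    }]
)
```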
Question #5
A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster is configured with a single DB instance. The application performs read and write operations on the database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance window. The cluster must remain available with the least possible interruption during the maintenance window.
What should a DevOps engineer do to meet these requirements?
- A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads. (Most Voted)
- B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
- C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
- D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
Correct Answer:
C

Adding a reader instance to the existing Aurora cluster and pointing the application at the cluster (writer) endpoint for writes and the reader endpoint for reads keeps the database available during the maintenance window: while one instance is being updated, the other can continue serving traffic, and Aurora can fail over between them. Note that Aurora does not offer a simple Multi-AZ toggle on an existing single-instance cluster; high availability in Aurora comes from adding reader instances in additional Availability Zones, which is why the reader-instance approach provides the least interruption and aligns with AWS best practices for availability and scalability.
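For illustration, the cluster and instance identifiers below are hypothetical; the sketch adds a reader to the cluster and prints the cluster and reader endpoints the application would switch to.

```python
# Sketch: add an Aurora reader instance and look up the endpoints the application
# should use (identifiers and instance class are hypothetical).
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-1",
    DBClusterIdentifier="app-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora-cluster")["DBClusters"][0]
print("Cluster (writer) endpoint for writes:", cluster["Endpoint"])
print("Reader endpoint for reads:", cluster["ReaderEndpoint"])
```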
Question #6
A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has access to a source account where an unencrypted custom AMI has been built. The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI. The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)
- A. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action. (Most Voted)
- B. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.
- C. In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.
- D. In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role. (Most Voted)
- E. In the source account, share the unencrypted AMI with the target account.
- F. In the source account, share the encrypted AMI with the target account. (Most Voted)
Correct Answer:
ACD

To effectively share an encrypted AMI across accounts while complying with encryption requirements, begin by using the AWS KMS key to copy the unencrypted AMI to an encrypted format. This is crucial for maintaining security standards. Next, modify the key policy in the source account to permit the target account to create a KMS grant. This facilitates secure access for the target. Finally, establish a KMS grant in the source account that authorizes the Auto Scaling service-linked role in the target account, ensuring that the Auto Scaling group can efficiently utilize the AMI. These steps ensure both compliance and functionality in multi-account scenarios.
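The boto3 sketch below walks through those steps from the source account. The AMI ID, account numbers, key ARN, and region are placeholders, and the grant for the target account's Auto Scaling service-linked role is shown from the source side purely as a sketch.

```python
# Sketch of the cross-account encrypted-AMI flow (all identifiers are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
kms = boto3.client("kms", region_name="us-east-1")

KEY_ARN = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE-KEY-ID"
TARGET_ACCOUNT = "222222222222"

# 1. Copy the unencrypted AMI to an encrypted AMI, specifying the KMS key.
encrypted_ami = ec2.copy_image(
    Name="app-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId=KEY_ARN,
)["ImageId"]

# 2. Share the encrypted AMI with the target account.
ec2.modify_image_attribute(
    ImageId=encrypted_ami,
    LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT}]},
)

# 3. Grant the target account's Auto Scaling service-linked role use of the key.
kms.create_grant(
    KeyId=KEY_ARN,
    GranteePrincipal=(f"arn:aws:iam::{TARGET_ACCOUNT}:role/aws-service-role/"
                      "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"),
    Operations=["Decrypt", "DescribeKey", "GenerateDataKeyWithoutPlaintext",
                "ReEncryptFrom", "ReEncryptTo", "CreateGrant"],
)
```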
Question #7
A company uses AWS CodePipeline pipelines to automate releases of its application. A typical pipeline consists of three stages: build, test, and deployment. The company has been using a separate AWS CodeBuild project to run scripts for each stage. However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines.
The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances. The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI.
Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)
- A. Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy. (Most Voted)
- B. Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.
- C. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.
- D. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application. (Most Voted)
- E. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
Correct Answer:
AD

To effectively deploy your RPM package to an Auto Scaling group of EC2 instances using CodeDeploy in AWS, begin by embedding the CodeDeploy agent in a new version of your common AMI. This ensures the instances are prepared to handle deployments via CodeDeploy. Concurrently, update the IAM role associated with these EC2 instances to empower them to interact securely with CodeDeploy services, ensuring a smooth and automated deployment process. This setup complements the use of CodePipeline, allowing for a well-integrated DevOps workflow across AWS services.
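As a sketch of the CodeDeploy side of this setup (application, deployment group, role, and Auto Scaling group names are hypothetical), the calls below create the application and an in-place deployment group that targets the Auto Scaling group, which the CodePipeline deploy action would then reference.

```python
# Sketch: CodeDeploy application and in-place deployment group targeting the ASG
# (all names and ARNs are hypothetical).
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_application(
    applicationName="rpm-app",
    computePlatform="Server",          # EC2/on-premises deployments
)

codedeploy.create_deployment_group(
    applicationName="rpm-app",
    deploymentGroupName="rpm-app-prod",
    serviceRoleArn="arn:aws:iam::111111111111:role/CodeDeployServiceRole",
    autoScalingGroups=["rpm-app-asg"],
    deploymentStyle={
        "deploymentType": "IN_PLACE",
        "deploymentOption": "WITHOUT_TRAFFIC_CONTROL",
    },
)
```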
Question #8
A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs. The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations. The company has configured AWS Config for the organization. During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs.
Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)
- A. Delegate AWS Firewall Manager to a security account. (Most Voted)
- B. Delegate Amazon GuardDuty to a security account.
- C. Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs. (Most Voted)
- D. Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
- E. Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
Correct Answer:
AC

The best way to address the company's security requirement is by using AWS Firewall Manager, which ensures consistency across hundreds of accounts by centrally managing the security settings. To prevent future violations effectively, delegating the AWS Firewall Manager to a security account (Option A) enables specialized oversight, enhancing security enforcement across the organization. Additionally, creating a policy in AWS Firewall Manager that automatically attaches AWS WAF web ACLs to any newly created ALBs and API Gateway APIs (Option C) ensures that newly deployed resources comply with the security standards from the start, facilitating proactive compliance and reducing manual oversight.
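A rough boto3 sketch of such a Firewall Manager policy is below. It must run in the delegated administrator account, and the policy name and the ManagedServiceData contents are simplified placeholders rather than a production WAF configuration.

```python
# Sketch: organization-wide Firewall Manager policy that associates AWS WAF web ACLs
# with ALBs and API Gateway stages (ManagedServiceData is simplified for illustration).
import boto3

fms = boto3.client("fms")   # run from the delegated Firewall Manager administrator account

fms.put_policy(
    Policy={
        "PolicyName": "attach-waf-to-albs-and-apis",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            "ManagedServiceData": (
                '{"type":"WAFV2","preProcessRuleGroups":[],"postProcessRuleGroups":[],'
                '"defaultAction":{"type":"ALLOW"},"overrideCustomerWebACLAssociation":false}'
            ),
        },
        "ResourceType": "ResourceTypeList",
        "ResourceTypeList": [
            "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "AWS::ApiGateway::Stage",
        ],
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,
    }
)
```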
Question #9
A company uses AWS Key Management Service (AWS KMS) keys and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days.
Which solution will accomplish this?
- A. Configure AWS KMS to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
- B. Configure an Amazon EventBridge event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon Simple Notification Service (Amazon SNS) topic.
- C. Develop an AWS Config custom rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old. (Most Voted)
- D. Configure AWS Security Hub to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
Correct Answer:
C

AWS KMS doesn't support automatic notifications based on key age. The right move here is to develop a custom AWS Config rule to monitor the key rotation status. This rule can track the age of each key and issue a notification through Amazon SNS once any key exceeds the 90-day rotation period. This approach aligns closely with the needed compliance requirements and ensures proactive management of the KMS keys.
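A minimal sketch of the evaluation Lambda behind such a custom rule follows. It treats a key's creation date as the last rotation date (a reasonable stand-in under manual rotation, where rotating means creating a new key) and reports compliance back to AWS Config; the SNS notification is then driven from the rule's compliance result as described above.

```python
# Sketch: evaluation Lambda for a custom AWS Config rule that flags KMS keys older
# than 90 days (creation date stands in for last manual rotation).
import datetime
import json
import boto3

config = boto3.client("config")
kms = boto3.client("kms")
MAX_AGE_DAYS = 90

def lambda_handler(event, context):
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    key_id = item["resourceId"]

    created = kms.describe_key(KeyId=key_id)["KeyMetadata"]["CreationDate"]
    age_days = (datetime.datetime.now(datetime.timezone.utc) - created).days
    compliance = "COMPLIANT" if age_days <= MAX_AGE_DAYS else "NON_COMPLIANT"

    # Report the result back to AWS Config for this key.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": "AWS::KMS::Key",
            "ComplianceResourceId": key_id,
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```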
Question #10
A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request. The security team does not allow unauthenticated requests to S3 buckets for this project.
How can this issue be corrected in the MOST secure manner?
- A. Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
- B. Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
- C. Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script. (Most Voted)
- D. Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.
Correct Answer:
C

Answer C is the most secure option. Removing unauthenticated access with a bucket policy ensures that only authorized principals can reach the S3 bucket, and adding Amazon S3 permissions to the CodeBuild project's service role lets the build download the database population script with temporary credentials that AWS manages automatically. Using the AWS CLI in the build spec then retrieves the script over an authenticated, controlled path without embedding long-lived access keys, which is what makes this approach preferable to option D.
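A boto3 sketch of the two changes follows. The bucket, role, and policy names are hypothetical; the sketch uses the S3 Block Public Access settings as one way to shut off anonymous access, and grants the CodeBuild service role s3:GetObject so the build spec can fetch the script with an ordinary authenticated call such as aws s3 cp.

```python
# Sketch: lock down the bucket and grant the CodeBuild service role read access
# (bucket, role, and policy names are hypothetical).
import json
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")

BUCKET = "db-population-scripts"
CODEBUILD_ROLE = "codebuild-project-service-role"

# Block unauthenticated/public access at the bucket level.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Allow the CodeBuild service role to read the script, so the build spec can run
# an authenticated download (e.g. aws s3 cp s3://db-population-scripts/populate.sql .).
iam.put_role_policy(
    RoleName=CODEBUILD_ROLE,
    PolicyName="read-db-population-script",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }],
    }),
)
```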