Amazon AWS Certified DevOps Engineer - Professional DOP-C02 Exam Practice Questions (P. 4)
Question #31
Which logging solution will support these requirements?
- A. Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination. (Most Voted)
- B. Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon EventBridge events that invoke Lambda.
- C. Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
- D. Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.
Answer: A
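For reference, option A corresponds to enabling EKS control plane logging to CloudWatch Logs and attaching a subscription filter with a Lambda destination per component log group. A minimal boto3 sketch, assuming hypothetical log group, account, and function names:

```python
import boto3

logs = boto3.client("logs")
lambda_client = boto3.client("lambda")

LOG_GROUP = "/aws/eks/demo-cluster/cluster"  # hypothetical EKS control plane log group
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111111111111:function:eks-log-processor"  # hypothetical

# Allow CloudWatch Logs to invoke the Lambda function for this log group.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="eks-logs-invoke",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=f"arn:aws:logs:us-east-1:111111111111:log-group:{LOG_GROUP}:*",
)

# Stream every log event from the component's log group to the Lambda function.
logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="eks-component-to-lambda",
    filterPattern="",  # empty pattern forwards all events
    destinationArn=FUNCTION_ARN,
)
```

The same filter would be repeated for each EKS component log type the question has in scope.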

Question #32
A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis.
Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)
- A. Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task.
- B. Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs. (Most Voted)
- C. Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.
- D. Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket. (Most Voted)
- E. Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.
- F. Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose. (Most Voted)
Answer: BDE
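Options B and D, which both answer sets agree on, come down to pointing the ECS task's log driver at CloudWatch Logs and enabling ALB access logging to S3. A minimal boto3 sketch, assuming hypothetical resource names and ARNs:

```python
import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# B: register a task definition whose container ships stdout/stderr to CloudWatch Logs.
ecs.register_task_definition(
    family="web-app",  # hypothetical task family
    containerDefinitions=[{
        "name": "web",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",  # hypothetical
        "memory": 512,
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/web-app",
                "awslogs-region": "us-east-1",
                "awslogs-stream-prefix": "web",
            },
        },
    }],
)

# D: turn on ALB access logging straight to the logging bucket.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/web/abc123",  # hypothetical
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "logging-bucket"},  # hypothetical bucket
    ],
)
```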

Question #33
How can the deployments of the operating system and application patches be automated using a default and custom repository?
- A. Use AWS Systems Manager to create a new patch baseline including the custom repository. Run the AWS-RunPatchBaseline document using the run command to verify and install patches. (Most Voted)
- B. Use AWS Direct Connect to integrate the corporate repository and deploy the patches using Amazon CloudWatch scheduled events, then use the CloudWatch dashboard to create reports.
- C. Use yum-config-manager to add the custom repository under /etc/yum.repos.d and run yum-config-manager --enable to activate the repository.
- D. Use AWS Systems Manager to create a new patch baseline including the corporate repository. Run the AWS-AmazonLinuxDefaultPatchBaseline document using the run command to verify and install patches.
Answer: A
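Option A boils down to a Systems Manager patch baseline that includes the custom repository plus a Run Command invocation of AWS-RunPatchBaseline. A minimal boto3 sketch, assuming hypothetical repository details and instance tags:

```python
import boto3

ssm = boto3.client("ssm")

# Create a patch baseline that also pulls packages from the custom repository.
ssm.create_patch_baseline(
    Name="corp-amazon-linux-baseline",  # hypothetical baseline name
    OperatingSystem="AMAZON_LINUX_2",
    ApprovalRules={"PatchRules": [{
        "PatchFilterGroup": {"PatchFilters": [
            {"Key": "CLASSIFICATION", "Values": ["Security", "Bugfix"]},
        ]},
        "ApproveAfterDays": 0,
    }]},
    Sources=[{
        "Name": "corp-repo",  # hypothetical custom repository
        "Products": ["AmazonLinux2"],
        "Configuration": "[corp-repo]\nname=Corp Repo\nbaseurl=https://repo.example.com/el7\nenabled=1",
    }],
)

# Scan and install approved patches on tagged instances via Run Command.
ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:PatchGroup", "Values": ["web"]}],  # hypothetical patch group tag
    Parameters={"Operation": ["Install"]},
)
```

In practice the baseline would also be registered to the relevant patch group before running the document.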

Question #34
Which strategy will meet these requirements?
- A. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create a runtime environment and build commands in the buildspec file to invoke test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
- B. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to invoke an AWS Lambda function that will run the test scripts. If errors are found, use the aws deploy stop-deployment command to stop the deployment.
- C. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to initiate rollback. (Most Voted)
- D. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test scripts. If errors are found, use the aws deploy stop-deployment CLI command to stop the deployment.
Answer: C
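Option C hinges on the hook Lambda reporting a lifecycle event status back to CodeDeploy; a Failed status makes CodeDeploy stop and roll back. A minimal handler sketch, where run_smoke_tests is a hypothetical placeholder for the test scripts:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def run_smoke_tests() -> bool:
    """Hypothetical placeholder for the validation test scripts."""
    return True

def handler(event, context):
    # CodeDeploy passes these IDs to the AfterAllowTestTraffic hook invocation.
    status = "Succeeded" if run_smoke_tests() else "Failed"
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=event["DeploymentId"],
        lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
        status=status,  # "Failed" triggers rollback of the deployment
    )
    return status
```

The AppSpec hooks section would list this function under the AfterAllowTestTraffic lifecycle event.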

Question #35
Which solution ensures that all the updated third-party files are available in the morning?
- A. Configure a nightly Amazon EventBridge event to invoke an AWS Lambda function to run the RefreshCache command for Storage Gateway. (Most Voted)
- B. Instruct the third party to put data into the S3 bucket using AWS Transfer for SFTP.
- C. Modify Storage Gateway to run in volume gateway mode.
- D. Use S3 Same-Region Replication to replicate any changes made directly in the S3 bucket to Storage Gateway.
Answer: A
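Option A is a nightly EventBridge schedule that invokes a Lambda function calling the Storage Gateway RefreshCache API, so the file gateway picks up objects written directly to the S3 bucket. A minimal handler sketch, assuming a hypothetical file share ARN:

```python
import boto3

storagegateway = boto3.client("storagegateway")

FILE_SHARE_ARN = "arn:aws:storagegateway:us-east-1:111111111111:share/share-12345678"  # hypothetical

def handler(event, context):
    # Re-reads the S3 bucket so files uploaded directly to S3 become visible on the file share.
    response = storagegateway.refresh_cache(
        FileShareARN=FILE_SHARE_ARN,
        FolderList=["/"],  # refresh everything under the share root
        Recursive=True,
    )
    return response["NotificationId"]
```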

Question #36
Which combination of actions should be performed to enable this replication? (Choose three.)
- A. Create a replication IAM role in the source account. (Most Voted)
- B. Create a replication IAM role in the target account.
- C. Add statements to the source bucket policy allowing the replication IAM role to replicate objects.
- D. Add statements to the target bucket policy allowing the replication IAM role to replicate objects. (Most Voted)
- E. Create a replication rule in the source bucket to enable the replication. (Most Voted)
- F. Create a replication rule in the target bucket to enable the replication.
Answer: ADE
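Options A, D, and E translate to a replication role and rule defined on the source bucket plus a target-bucket policy that trusts that role. A minimal boto3 sketch from the source account, assuming hypothetical bucket names, account IDs, and role ARN:

```python
import boto3

s3 = boto3.client("s3")

ROLE_ARN = "arn:aws:iam::111111111111:role/s3-replication-role"  # hypothetical role in the source account (A)
SOURCE_BUCKET = "source-bucket"                                   # hypothetical
TARGET_BUCKET_ARN = "arn:aws:s3:::target-bucket"                  # hypothetical, in the target account

# E: replication rule on the source bucket, executed as the replication role.
s3.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": TARGET_BUCKET_ARN, "Account": "222222222222"},
        }],
    },
)

# D: statement the target account adds to its bucket policy so the role may replicate into it.
target_bucket_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": ROLE_ARN},
    "Action": ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ObjectOwnerOverrideToBucketOwner"],
    "Resource": f"{TARGET_BUCKET_ARN}/*",
}
```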

Question #37
Which combination of access changes will meet these requirements? (Choose three.)
- A. Create a trust relationship that allows users in the member accounts to assume the management account IAM role.
- B. Create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts. (Most Voted)
- C. Create an IAM role in each member account that has access to the AmazonEC2ReadOnlyAccess managed policy. (Most Voted)
- D. Create an IAM role in each member account to allow the sts:AssumeRole action against the management account IAM role's ARN.
- E. Create an IAM role in the management account that allows the sts:AssumeRole action against the member account IAM role's ARN. (Most Voted)
- F. Create an IAM role in the management account that has access to the AmazonEC2ReadOnlyAccess managed policy.
Answer: BCE
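Options B, C, and E come down to a read-only role in each member account that trusts the management account, which in turn has a role allowed to assume it. A minimal boto3 sketch run in a member account, assuming hypothetical account IDs and role names:

```python
import json
import boto3

iam = boto3.client("iam")

MANAGEMENT_ACCOUNT_ID = "111111111111"  # hypothetical

# B: trust policy that lets principals in the management account assume this member role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# C: member-account role with the AmazonEC2ReadOnlyAccess managed policy attached.
iam.create_role(
    RoleName="ec2-read-only-audit",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="ec2-read-only-audit",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)

# E: in the management account, a role needs a matching statement such as
#    {"Effect": "Allow", "Action": "sts:AssumeRole",
#     "Resource": "arn:aws:iam::<member-account-id>:role/ec2-read-only-audit"}
```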

Question #38
Because of inconsistencies in the data that the satellites produce, the application is occasionally unable to transform the data. In these cases, the messages remain in the SQS queue. A DevOps engineer must develop a solution that retains the failed messages and makes them available to scientists for review and future processing.
Which solution will meet these requirements?
- A. Configure AWS Lambda to poll the SQS queue and invoke a Lambda function to check whether the queue messages are valid. If validation fails, send a copy of the data that is not valid to an Amazon S3 bucket so that the scientists can review and correct the data. When the data is corrected, amend the message in the SQS queue by using a replay Lambda function with the corrected data.
- B. Convert the SQS standard queue to an SQS FIFO queue. Configure AWS Lambda to poll the SQS queue every 10 minutes by using an Amazon EventBridge schedule. Invoke the Lambda function to identify any messages with a SentTimestamp value that is older than 5 minutes, push the data to the same location as the application's output location, and remove the messages from the queue.
- C. Create an SQS dead-letter queue. Modify the existing queue by including a redrive policy that sets the Maximum Receives setting to 1 and sets the dead-letter queue ARN to the ARN of the newly created queue. Instruct the scientists to use the dead-letter queue to review the data that is not valid. Reprocess this data at a later time. (Most Voted)
- D. Configure API Gateway to send messages to different SQS virtual queues that are named for each of the satellites. Update the application to use a new virtual queue for any data that it cannot transform, and send the message to the new virtual queue. Instruct the scientists to use the virtual queue to review the data that is not valid. Reprocess this data at a later time.
Answer: A
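The most-voted option (C) describes an SQS dead-letter queue attached to the existing queue through a redrive policy. A minimal boto3 sketch, assuming hypothetical queue names and ARNs:

```python
import json
import boto3

sqs = boto3.client("sqs")

SOURCE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/satellite-data"  # hypothetical
DLQ_ARN = "arn:aws:sqs:us-east-1:111111111111:satellite-data-dlq"                    # hypothetical

# Create the dead-letter queue, then point the source queue's redrive policy at it.
sqs.create_queue(QueueName="satellite-data-dlq")
sqs.set_queue_attributes(
    QueueUrl=SOURCE_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": DLQ_ARN,
            "maxReceiveCount": "1",  # move a message to the DLQ after one failed receive
        }),
    },
)
```

Failed messages then stay in the dead-letter queue until the scientists review and reprocess them.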

Question #39
Which solution ensures resources are deployed in accordance with company policy?
- A. Create AWS Trusted Advisor checks to find and remediate unapproved CloudFormation StackSets.
- B. Create a CloudFormation drift detection operation to find and remediate unapproved CloudFormation StackSets.
- C. Create CloudFormation StackSets with approved CloudFormation templates.
- D. Create AWS Service Catalog products with approved CloudFormation templates. (Most Voted)
Answer: D
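Option D means packaging the approved CloudFormation templates as AWS Service Catalog products that users can launch but not modify. A minimal boto3 sketch, assuming a hypothetical product name and template URL:

```python
import boto3

servicecatalog = boto3.client("servicecatalog")

# Publish an approved CloudFormation template as a launchable Service Catalog product.
servicecatalog.create_product(
    Name="approved-vpc",            # hypothetical product name
    Owner="cloud-platform-team",    # hypothetical owner
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {
            # Hypothetical S3 URL of the approved template.
            "LoadTemplateFromURL": "https://approved-templates.s3.amazonaws.com/vpc.yaml",
        },
    },
)
```

The product would then be added to a portfolio that is shared with the end users who deploy resources.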

Question #40
Which combination of architecture adjustments should the company implement to achieve high availability? (Choose two.)
- A. Add the NAT instance to an EC2 Auto Scaling group that spans multiple Availability Zones. Update the route tables.
- B. Create additional EC2 instances spanning multiple Availability Zones. Add an Application Load Balancer to split the load between them. (Most Voted)
- C. Configure an Application Load Balancer in front of the EC2 instance. Configure Amazon CloudWatch alarms to recover the EC2 instance upon host failure.
- D. Replace the NAT instance with a NAT gateway in each Availability Zone. Update the route tables. (Most Voted)
- E. Replace the NAT instance with a NAT gateway that spans multiple Availability Zones. Update the route tables.
Answer: BD
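Option D replaces the single NAT instance with one NAT gateway per Availability Zone plus a default route in each zone's private route table. A minimal boto3 sketch for one zone, assuming hypothetical subnet, Elastic IP allocation, and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a NAT gateway in the public subnet of one Availability Zone.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111",        # hypothetical public subnet in AZ a
    AllocationId="eipalloc-0bbb2222",  # hypothetical Elastic IP allocation
)
nat_gateway_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available, then default-route the AZ's private subnets through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])
ec2.create_route(
    RouteTableId="rtb-0ccc3333",       # hypothetical private route table for AZ a
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway_id,
)
```

Repeating this per Availability Zone removes the single point of failure; option B then spreads the application instances across zones behind an Application Load Balancer.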
