Amazon AWS DevOps Engineer Professional Exam Practice Questions (P. 3)
Question #11
A company wants to ensure that their EC2 instances are secure. They want to be notified if any new vulnerabilities are discovered on their instances, and they also want an audit trail of all login activities on the instances.
Which solution will meet these requirements?
- A. Use AWS Systems Manager to detect vulnerabilities on the EC2 instances. Install the Amazon Kinesis Agent to capture system logs and deliver them to Amazon S3.
- B. Use AWS Systems Manager to detect vulnerabilities on the EC2 instances. Install the Systems Manager Agent to capture system logs and view login activity in the CloudTrail console.
- C. Configure Amazon CloudWatch to detect vulnerabilities on the EC2 instances. Install the AWS Config daemon to capture system logs and view them in the AWS Config console.
- D. Configure Amazon Inspector to detect vulnerabilities on the EC2 instances. Install the Amazon CloudWatch Agent to capture system logs and record them via Amazon CloudWatch Logs. (Most Voted)
Correct Answer:
D

Amazon Inspector is the AWS service purpose-built for vulnerability assessment: it scans EC2 instances for software vulnerabilities and unintended network exposure, and can notify on new findings. For the audit trail of login activity, installing the Amazon CloudWatch agent lets the instances ship operating-system logs (such as SSH authentication logs) to Amazon CloudWatch Logs, where they can be retained and searched. Option B fails on both counts: AWS Systems Manager manages patch compliance but is not a vulnerability scanner, and AWS CloudTrail records AWS API calls, not operating-system logins, so it cannot provide the required login audit trail.
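As a sketch of the logging half of option D: the CloudWatch agent reads a JSON configuration naming which system log files to forward to CloudWatch Logs. The file path and log group name below are illustrative assumptions, not values from the question.

```python
import json

# Hypothetical CloudWatch agent configuration: forward the SSH auth log
# to a CloudWatch Logs group so instance logins are auditable. The file
# path ("/var/log/secure") fits Amazon Linux/RHEL; adjust per distro.
agent_config = {
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/secure",        # OS auth log
                        "log_group_name": "/ec2/login-audit",  # assumed name
                        "log_stream_name": "{instance_id}",
                    }
                ]
            }
        }
    }
}

# In practice this JSON is saved on the instance and loaded with the
# amazon-cloudwatch-agent-ctl fetch-config command.
print(json.dumps(agent_config, indent=2))
```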
Question #12
A DevOps Engineer needs to back up sensitive Amazon S3 objects that are stored within an S3 bucket with a private bucket policy using the S3 cross-region replication functionality. The objects need to be copied to a target bucket in a different AWS Region and account.
Which actions should be performed to enable this replication? (Choose three.)
- A. Create a replication IAM role in the source account. (Most Voted)
- B. Create a replication IAM role in the target account.
- C. Add statements to the source bucket policy allowing the replication IAM role to replicate objects.
- D. Add statements to the target bucket policy allowing the replication IAM role to replicate objects. (Most Voted)
- E. Create a replication rule in the source bucket to enable the replication. (Most Voted)
- F. Create a replication rule in the target bucket to enable the replication.
Correct Answer:
ADE

For S3 cross-region replication into a bucket in a different AWS Region and account: first, create a replication IAM role in the source account (A), since the source account owns and runs the replication. Next, create a replication rule on the source bucket that specifies the destination bucket and which objects to replicate (E); replication rules are always configured on the source bucket, never the target. Finally, because the target bucket belongs to a different account, its bucket policy must grant the source account's replication role permission to replicate objects into it (D). Together these steps enable cross-account, cross-Region replication while preserving access controls.
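A sketch of step D, the target-bucket policy statement that admits the source account's replication role. The account ID, role name, and bucket name are placeholders, not values from the question.

```python
import json

# Hypothetical target-bucket policy granting the source account's
# replication IAM role the S3 replication actions it needs. All ARNs
# below are illustrative placeholders.
target_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReplicationFromSourceAccount",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:role/s3-replication-role"
            },
            "Action": [
                "s3:ReplicateObject",   # write replicated object data
                "s3:ReplicateDelete",   # propagate delete markers
                "s3:ReplicateTags",     # propagate object tags
            ],
            "Resource": "arn:aws:s3:::example-target-bucket/*",
        }
    ],
}

print(json.dumps(target_bucket_policy, indent=2))
```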
Question #13
A company is using Amazon EC2 for various workloads. Company policy requires that instances be managed centrally to standardize configurations. These configurations include standard logging, metrics, security assessments, and weekly patching.
How can the company meet these requirements? (Choose three.)
- A. Use AWS Config to ensure all EC2 instances are managed by Amazon Inspector.
- B. Use AWS Config to ensure all EC2 instances are managed by AWS Systems Manager. (Most Voted)
- C. Use AWS Systems Manager to install and manage Amazon Inspector, Systems Manager Patch Manager, and the Amazon CloudWatch agent on all instances. (Most Voted)
- D. Use Amazon Inspector to install and manage AWS Systems Manager, Systems Manager Patch Manager, and the Amazon CloudWatch agent on all instances.
- E. Use AWS Systems Manager maintenance windows with Systems Manager Run Command to schedule Systems Manager Patch Manager tasks. Use the Amazon CloudWatch agent to schedule Amazon Inspector assessment runs.
- F. Use AWS Systems Manager maintenance windows with Systems Manager Run Command to schedule Systems Manager Patch Manager tasks. Use Amazon CloudWatch Events to schedule Amazon Inspector assessment runs. (Most Voted)
Correct Answer:
BCF

AWS Systems Manager is the central management plane here. AWS Config can continuously evaluate whether every EC2 instance is managed by Systems Manager (B), providing the compliance check. Systems Manager can then distribute and maintain the required agents across all instances: the Amazon Inspector agent for security assessments, Patch Manager for patching, and the Amazon CloudWatch agent for standard logging and metrics (C). For the weekly cadence, Systems Manager maintenance windows with Run Command schedule the Patch Manager tasks, while Amazon CloudWatch Events schedules the recurring Amazon Inspector assessment runs (F). Option E fails because the CloudWatch agent only collects logs and metrics; it cannot schedule assessment runs.
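The weekly schedule in option F can be sketched as the parameters one might pass to Systems Manager when creating a maintenance window (e.g. via boto3's `ssm.create_maintenance_window`), plus a CloudWatch Events cron for the Inspector runs. The names and cron times below are assumptions, not values from the question.

```python
# Hypothetical parameters for an SSM maintenance window that runs
# Patch Manager tasks every week. Name and schedule are illustrative.
maintenance_window = {
    "Name": "weekly-patching",
    "Schedule": "cron(0 2 ? * SUN *)",  # every Sunday at 02:00 UTC
    "Duration": 3,                      # window stays open 3 hours
    "Cutoff": 1,                        # stop launching tasks 1h before close
    "AllowUnassociatedTargets": False,
}

# A CloudWatch Events rule with a similar cron expression would trigger
# the weekly Amazon Inspector assessment run (option F's second half).
inspector_rule_schedule = "cron(0 3 ? * SUN *)"

print(maintenance_window["Schedule"], inspector_rule_schedule)
```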
Question #14
A business has an application that consists of five independent AWS Lambda functions.
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests, packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to ensure the pipeline execution starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months, the DevOps Engineer has noticed the pipeline takes too long to complete.
What should the DevOps Engineer implement to BEST improve the speed of the pipeline?
- A. Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
- B. Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the builds in parallel.
- C. Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same runOrder. (Most Voted)
- D. Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
Correct Answer:
C

To optimize the performance of a CI/CD pipeline that builds, tests, and deploys multiple independent Lambda functions, consider adjusting the pipeline configuration to allow for parallel execution of these functions. Specifically, by setting actions for each Lambda function to have the same runOrder in the pipeline's stage, you establish concurrent processing which drastically cuts down the overall completion time. This method leverages AWS CodePipeline's ability to manage simultaneous actions effectively, improving throughput without the need for additional compute resources or environment alterations.
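A sketch of option C: a CodePipeline stage definition in which every Lambda function's build action shares the same runOrder, so CodePipeline executes them concurrently. The function and CodeBuild project names are illustrative assumptions.

```python
# Five independent Lambda functions, built in parallel because every
# action in the stage carries runOrder 1. Names are placeholders.
functions = ["fn-a", "fn-b", "fn-c", "fn-d", "fn-e"]

deploy_stage = {
    "name": "BuildAndDeploy",
    "actions": [
        {
            "name": f"Build-{fn}",
            "runOrder": 1,  # identical runOrder => actions run concurrently
            "actionTypeId": {
                "category": "Build",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "configuration": {"ProjectName": f"build-{fn}"},
        }
        for fn in functions
    ],
}

# No action waits on another: all five share the same runOrder.
assert {a["runOrder"] for a in deploy_stage["actions"]} == {1}
```

Actions with different runOrder values execute sequentially in ascending order, which is exactly the behavior the original pipeline suffered from.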
Question #15
A company is creating a software solution that executes a specific parallel-processing mechanism. The software can scale to tens of servers in some special scenarios. This solution uses a proprietary library that is license-based, requiring that each individual server have a single, dedicated license installed. The company has 200 licenses and is planning to run 200 server nodes concurrently at most.
The company has requested the following features:
✑ A mechanism to automate the use of the licenses at scale.
✑ Creation of a dashboard to use in the future to verify which licenses are available at any moment.
What is the MOST effective way to accomplish these requirements?
- A. Upload the licenses to a private Amazon S3 bucket. Create an AWS CloudFormation template with a Mappings section for the licenses. In the template, create an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the Mappings section. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated.
- B. Upload the licenses to an Amazon DynamoDB table. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from the DynamoDB table. Create an Auto Scaling lifecycle hook, then use it to update the mapping after the instance is terminated. (Most Voted)
- C. Upload the licenses to a private Amazon S3 bucket. Populate an Amazon SQS queue with the list of licenses stored in S3. Create an AWS CloudFormation template that uses an Auto Scaling group to launch the servers. In the user data script, acquire an available license from SQS. Create an Auto Scaling lifecycle hook, then use it to put the license back in SQS after the instance is terminated.
- D. Upload the licenses to an Amazon DynamoDB table. Create an AWS CLI script to launch the servers by using the parameter --count, with min:max instances to launch. In the user data script, acquire an available license from the DynamoDB table. Monitor each instance and, in case of failure, replace the instance, then manually update the DynamoDB table.
Correct Answer:
B

Amazon DynamoDB is well suited as a license registry because it supports conditional, per-item writes, so each booting instance can atomically claim a free license with no risk of two servers taking the same one, even at the 200-node scale described. In option B, the Auto Scaling group launches servers automatically, the user data script acquires a license from the table at boot, and an Auto Scaling lifecycle hook returns the license when the instance terminates, fully automating license use. The same table doubles as the data source for a dashboard showing which licenses are free at any moment. Option D fails the automation requirement: it relies on a one-off CLI launch and manual table updates after instance failures.
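The license-claim step from option B can be sketched with an in-memory dict standing in for the DynamoDB table. In practice the same logic is a DynamoDB UpdateItem with a condition expression such as `attribute_not_exists(instance_id)`, so two instances can never claim the same license; the table layout and attribute names below are assumptions.

```python
# Simulated license table: 200 items, each claimable by one instance.
# In the real design this is a DynamoDB table and acquire_license is a
# conditional UpdateItem, not a Python loop.
licenses = {f"license-{i}": {"instance_id": None} for i in range(1, 201)}

def acquire_license(instance_id):
    """Claim the first free license; return its id, or None if all 200 are taken."""
    for lic_id, item in licenses.items():
        if item["instance_id"] is None:        # conditional check
            item["instance_id"] = instance_id  # conditional write
            return lic_id
    return None

def release_license(lic_id):
    """Lifecycle-hook handler: free the license when its instance terminates."""
    licenses[lic_id]["instance_id"] = None

# Boot: user data script claims a license; terminate: hook returns it.
lic = acquire_license("i-0abc123")
release_license(lic)
```

Scanning the table for items where `instance_id` is unset is also what a dashboard would query to show available licenses.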