Amazon AWS DevOps Engineer Professional Exam Practice Questions (P. 2)
Question #6
A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications.
The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure.
What should a DevOps engineer do to meet these requirements?
- A. Create one AWS CodeCommit repository for all applications. Put each application's code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.
- B. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.
- C. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time to create one AMI for each server. Use AWS CloudFormation StackSets to automatically provision and decommission Amazon EC2 fleets by using these AMIs.
- D. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages. (Most Voted)
Correct Answer:
B
Reference:
https://towardsdatascience.com/ci-cd-logical-and-practical-approach-to-build-four-step-pipeline-on-aws-3f54183068ec
Question #7
A DevOps engineer is developing an application for a company. The application needs to persist files to Amazon S3. The application needs to upload files with different security classifications that the company defines. These classifications include confidential, private, and public. Files that have a confidential classification must not be viewable by anyone other than the user who uploaded them. The application uses the IAM role of the user to call the S3 API operations.
The DevOps engineer has modified the application to add a DataClassification tag with the value of confidential and an Owner tag with the uploading user's ID to each confidential object that is uploaded to Amazon S3.
Which set of additional steps must the DevOps engineer take to meet the company's requirements?
- A. Modify the S3 bucket's ACL to grant bucket-owner-read access to the uploading user's IAM role. Create an IAM policy that grants s3:GetObject operations on the S3 bucket when aws:ResourceTag/DataClassification equals confidential and s3:ExistingObjectTag/Owner equals ${aws:userid}. Attach the policy to the IAM roles for users who require access to the S3 bucket.
- B. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket. (Most Voted)
- C. Modify the S3 bucket policy to allow the s3:GetObject action when aws:ResourceTag/DataClassification equals confidential and aws:RequestTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
- D. Modify the S3 bucket's ACL to grant authenticated-read access when aws:ResourceTag/DataClassification equals confidential and s3:ExistingObjectTag/Owner equals ${aws:userid}. Create an IAM policy that grants s3:GetObject operations on the S3 bucket. Attach the policy to the IAM roles for users who require access to the S3 bucket.
Correct Answer:
B

To ensure that only the uploading user can read confidential files, the DevOps engineer should modify the S3 bucket policy to allow the s3:GetObject action only when the object's tags match the request (aws:ResourceTag/DataClassification equals confidential and s3:ExistingObjectTag/Owner equals ${aws:userid}). Bucket ACLs cannot express tag-based conditions like this, so the bucket policy is the right place for the restriction. A standard IAM policy that grants s3:GetObject on the bucket is then attached to the users' IAM roles for general access; together, the bucket policy and the IAM policy provide both operational access and per-owner confidentiality.
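As a rough sketch of what option B's tag-based condition could look like, the statement below is shown as a hypothetical CloudFormation snippet; the bucket name, account ID, and logical resource names are placeholders, and both tag checks are written with S3's s3:ExistingObjectTag condition key. It allows s3:GetObject only when the object is tagged confidential and its Owner tag matches the caller's unique ID:

```yaml
# Hypothetical sketch only -- bucket name, account ID, and logical IDs are
# placeholders; a real policy would scope the principal more narrowly.
Resources:
  ClassifiedBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: example-classified-bucket               # placeholder bucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: ConfidentialObjectsOwnerOnly
            Effect: Allow
            Principal:
              AWS: "arn:aws:iam::111122223333:root"   # placeholder account
            Action: "s3:GetObject"
            Resource: "arn:aws:s3:::example-classified-bucket/*"
            Condition:
              StringEquals:
                # Object-tag keys are evaluated against the object being read;
                # ${aws:userid} is resolved by IAM at request time.
                "s3:ExistingObjectTag/DataClassification": "confidential"
                "s3:ExistingObjectTag/Owner": "${aws:userid}"
```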
Question #8
A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.
A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps
Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.
How should the DevOps Engineer overcome this?
- A. Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function. (Most Voted)
- B. Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond.
- C. Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function.
- D. Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready.
Correct Answer:
A

Adding a BeforeAllowTraffic hook is the most practical way to address the intermittent failures caused by pending database updates. In a Lambda deployment, CodeDeploy runs this hook before shifting any traffic to the new version, so it can test and wait until the necessary database changes have fully propagated. The new version therefore never handles requests against incomplete or outdated database state, which removes the root cause of the post-deployment failures.
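A minimal sketch of what the Lambda AppSpec could look like with such a hook follows. The function name, alias, versions, and the validation function (CheckDatabaseReady) are placeholders; the validation Lambda would poll for the database changes and report its result back to CodeDeploy via PutLifecycleEventHookExecutionStatus:

```yaml
# Hypothetical appspec.yml for a CodeDeploy Lambda deployment.
# Function name, alias, versions, and hook function are placeholders.
version: 0.0
Resources:
  - orderingApiFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: ordering-api-handler      # placeholder function name
        Alias: live
        CurrentVersion: "4"             # version currently receiving traffic
        TargetVersion: "5"              # version being deployed
Hooks:
  # CheckDatabaseReady runs before any traffic is shifted; it waits until the
  # database changes have propagated, then reports Succeeded to CodeDeploy.
  - BeforeAllowTraffic: "CheckDatabaseReady"
```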
Question #9
A software company wants to automate the build process for a project where the code is stored in GitHub. When the repository is updated, source code should be compiled, tested, and pushed to Amazon S3.
Which combination of steps would address these requirements? (Choose three.)
- A. Add a buildspec.yml file to the source code with build instructions. (Most Voted)
- B. Configure a GitHub webhook to trigger a build every time a code change is pushed to the repository. (Most Voted)
- C. Create an AWS CodeBuild project with GitHub as the source repository. (Most Voted)
- D. Create an AWS CodeDeploy application with the Amazon EC2/On-Premises compute platform.
- E. Create an AWS OpsWorks deployment with the install dependencies command.
- F. Provision an Amazon EC2 instance to perform the build.
Correct Answer:
ABC

Choices A, B, and C together automate the build process from GitHub. The GitHub webhook (B) triggers a build whenever a change is pushed to the repository. The buildspec.yml file (A) defines how the source code is compiled and tested. The AWS CodeBuild project with GitHub as its source (C) runs those instructions and, after a successful build, uploads the resulting artifacts to Amazon S3, satisfying the requirement to push the build outputs to an S3 bucket.
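A minimal buildspec.yml along these lines could drive such a build. The Node.js runtime, commands, and artifact paths are placeholders for whatever the project actually uses, and the destination S3 bucket is configured in the CodeBuild project's artifact settings rather than in this file:

```yaml
# Hypothetical buildspec.yml sketch; runtime, commands, and paths are placeholders.
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18                 # placeholder runtime
    commands:
      - npm ci                   # install dependencies
  build:
    commands:
      - npm run build            # compile the source
      - npm test                 # run the test suite
artifacts:
  files:
    - 'dist/**/*'                # build output uploaded to the project's S3 artifact bucket
```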
Question #10
An online retail company based in the United States plans to expand its operations to Europe and Asia in the next six months. Its product currently runs on
Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. All data is stored in an Amazon Aurora database instance.
When the product is deployed in multiple regions, the company wants a single product catalog across all regions, but for compliance purposes, its customer information and purchases must be kept in each region.
How should the company meet these requirements with the LEAST amount of application changes?
- A. Use Amazon Redshift for the product catalog and Amazon DynamoDB tables for the customer information and purchases.
- B. Use Amazon DynamoDB global tables for the product catalog and regional tables for the customer information and purchases.
- C. Use Aurora with read replicas for the product catalog and additional local Aurora instances in each region for the customer information and purchases. (Most Voted)
- D. Use Aurora for the product catalog and Amazon DynamoDB global tables for the customer information and purchases.
Correct Answer:
C

Using Aurora with read replicas for the product catalog is the approach that requires the fewest application changes. Cross-region read replicas give every region a consistent, unified copy of the single product catalog. Deploying additional local Aurora instances in each region keeps customer information and purchases inside that region, satisfying the compliance requirement. Because the company already runs on Aurora, the application keeps the same database engine and access patterns, so no retraining or major changes to how the application talks to the database are needed.
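As a rough illustration of the catalog replication piece, a template deployed in a secondary region might create the catalog cluster as a cross-region read replica of the existing US cluster; the engine edition, identifiers, source cluster ARN, and instance class below are placeholders, and the regional customer/purchase database would be a separate, ordinary Aurora cluster in the same region:

```yaml
# Hypothetical sketch of the product-catalog read replica in a secondary Region.
# All names, the source cluster ARN, and the instance class are placeholders.
Resources:
  CatalogReplicaCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql                      # placeholder engine/edition
      DBClusterIdentifier: catalog-replica-eu
      # Replicates the primary product-catalog cluster in the US Region.
      ReplicationSourceIdentifier: arn:aws:rds:us-east-1:111122223333:cluster:catalog-primary

  CatalogReplicaInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: aurora-mysql
      DBClusterIdentifier: !Ref CatalogReplicaCluster
      DBInstanceClass: db.r5.large              # placeholder instance class
```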