Amazon AWS Certified Developer - Associate DVA-C02 Exam Practice Questions (P. 1)
Question #1
A company is implementing an application on Amazon EC2 instances. The application needs to process incoming transactions. When the application detects a transaction that is not valid, the application must send a chat message to the company's support team. To send the message, the application needs to retrieve the access token to authenticate by using the chat API.
A developer needs to implement a solution to store the access token. The access token must be encrypted at rest and in transit. The access token must also be accessible from other AWS accounts.
Which solution will meet these requirements with the LEAST management overhead?
- A. Use an AWS Systems Manager Parameter Store SecureString parameter that uses an AWS Key Management Service (AWS KMS) AWS managed key to store the access token. Add a resource-based policy to the parameter to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Parameter Store. Retrieve the token from Parameter Store with the decrypt flag enabled. Use the decrypted access token to send the message to the chat. (Most Voted)
- B. Encrypt the access token by using an AWS Key Management Service (AWS KMS) customer managed key. Store the access token in an Amazon DynamoDB table. Update the IAM role of the EC2 instances with permissions to access DynamoDB and AWS KMS. Retrieve the token from DynamoDB. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.
- C. Use AWS Secrets Manager with an AWS Key Management Service (AWS KMS) customer managed key to store the access token. Add a resource-based policy to the secret to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Secrets Manager. Retrieve the token from Secrets Manager. Use the decrypted access token to send the message to the chat.
- D. Encrypt the access token by using an AWS Key Management Service (AWS KMS) AWS managed key. Store the access token in an Amazon S3 bucket. Add a bucket policy to the S3 bucket to allow access from other accounts. Update the IAM role of the EC2 instances with permissions to access Amazon S3 and AWS KMS. Retrieve the token from the S3 bucket. Decrypt the token by using AWS KMS on the EC2 instances. Use the decrypted access token to send the message to the chat.
Correct Answer:
C

The listed answer for this question is disputed, and option D does not actually satisfy the requirements: an AWS managed KMS key cannot be shared with or used from other AWS accounts, so objects encrypted with it in Amazon S3 could not be decrypted cross-account, and assembling secret storage out of S3, bucket policies, and manual KMS calls is the opposite of least management overhead. AWS Secrets Manager (option C) is purpose-built for this use case. It encrypts the token at rest with the customer managed KMS key, serves it only over TLS, and supports resource-based policies directly on the secret for cross-account access, so the application retrieves the already-decrypted token with a single API call. Option A fails for the same KMS reason, and Parameter Store parameters do not support resource-based policies. Option C therefore meets all requirements with the least management overhead.
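For a sense of how little code option C requires, here is a minimal sketch of retrieving the token from another account, assuming the secret's resource-based policy and the customer managed key's policy already grant this account access; the ARN, region, and JSON field name are placeholders.

```python
import json

import boto3

# Hypothetical secret ARN; cross-account retrieval requires the full ARN,
# and the secret's resource policy must grant this account
# secretsmanager:GetSecretValue (the KMS key policy must allow Decrypt).
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111122223333:secret:chat-api-token-AbCdEf"

client = boto3.client("secretsmanager", region_name="us-east-1")

# Secrets Manager decrypts the value server-side with the customer managed
# key and returns it over TLS; no client-side KMS calls are needed.
response = client.get_secret_value(SecretId=SECRET_ARN)
access_token = json.loads(response["SecretString"])["token"]
```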
Question #2
A company is running Amazon EC2 instances in multiple AWS accounts. A developer needs to implement an application that collects all the lifecycle events of the EC2 instances. The application needs to store the lifecycle events in a single Amazon Simple Queue Service (Amazon SQS) queue in the company's main AWS account for further processing.
Which solution will meet these requirements?
- A. Configure Amazon EC2 to deliver the EC2 instance lifecycle events from all accounts to the Amazon EventBridge event bus of the main account. Add an EventBridge rule to the event bus of the main account that matches all EC2 instance lifecycle events. Add the SQS queue as a target of the rule.
- B. Use the resource policies of the SQS queue in the main account to give each account permissions to write to that SQS queue. Add to the Amazon EventBridge event bus of each account an EventBridge rule that matches all EC2 instance lifecycle events. Add the SQS queue in the main account as a target of the rule.
- C. Write an AWS Lambda function that scans through all EC2 instances in the company accounts to detect EC2 instance lifecycle changes. Configure the Lambda function to write a notification message to the SQS queue in the main account if the function detects an EC2 instance lifecycle change. Add an Amazon EventBridge scheduled rule that invokes the Lambda function every minute.
- D. Configure the permissions on the main account event bus to receive events from all accounts. Create an Amazon EventBridge rule in each account to send all the EC2 instance lifecycle events to the main account event bus. Add an EventBridge rule to the main account event bus that matches all EC2 instance lifecycle events. Set the SQS queue as a target for the rule. (Most Voted)
Correct Answer:
D

The best approach to collect EC2 lifecycle events from multiple AWS accounts into a single SQS queue in the main account is through a centralized event handling system. Option D is the most suitable as it allows the aggregation of all events within the main account's EventBridge, which then forwards these events to a singular SQS queue. This method leverages EventBridge's capability to receive events from multiple accounts, ensuring efficient event management and minimizing configuration redundancy across accounts. This centralized approach not only streamlines setup but also simplifies ongoing management and monitoring of events.
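As a rough sketch of the member-account side of option D, the rule below matches EC2 instance state-change events and forwards them to the main account's event bus; the account IDs, rule name, and forwarding role are hypothetical, and the main account must still grant events:PutEvents to the member accounts (for example with events.put_permission) and attach its own rule that targets the SQS queue.

```python
import json

import boto3

MAIN_BUS_ARN = "arn:aws:events:us-east-1:111111111111:event-bus/default"  # hypothetical

events = boto3.client("events", region_name="us-east-1")

# In each member account: match EC2 lifecycle events and forward them to
# the main account's event bus.
events.put_rule(
    Name="forward-ec2-lifecycle",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
)

# Cross-account bus targets need a role (in this member account) that is
# allowed to call events:PutEvents on the main bus.
events.put_targets(
    Rule="forward-ec2-lifecycle",
    Targets=[{
        "Id": "main-bus",
        "Arn": MAIN_BUS_ARN,
        "RoleArn": "arn:aws:iam::222222222222:role/forward-events",  # hypothetical
    }],
)
```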
Question #3
An application is using Amazon Cognito user pools and identity pools for secure access. A developer wants to integrate the user-specific file upload and download features in the application with Amazon S3. The developer must ensure that the files are saved and retrieved in a secure manner and that users can access only their own files. The file sizes range from 3 KB to 300 MB.
Which option will meet these requirements with the HIGHEST level of security?
- A. Use S3 Event Notifications to validate the file upload and download requests and update the user interface (UI).
- B. Save the details of the uploaded files in a separate Amazon DynamoDB table. Filter the list of files in the user interface (UI) by comparing the current user ID with the user ID associated with the file in the table.
- C. Use Amazon API Gateway and an AWS Lambda function to upload and download files. Validate each request in the Lambda function before performing the requested operation.
- D. Use an IAM policy within the Amazon Cognito identity prefix to restrict users to use their own folders in Amazon S3. (Most Voted)
Correct Answer:
D

Using an IAM policy configured with an Amazon Cognito identity prefix is an excellent method to ensure users can access only their folders in Amazon S3. This method leverages built-in AWS mechanisms to manage access, reducing the need for custom code and the associated risks. Such a setup not only reinforces security but also streamlines management by directly linking a user's identity to their access rights, as detailed in AWS documentation and best practices. This approach significantly minimizes potential security risks compared to other options like custom solutions or UI-based controls.
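As an illustration of option D, a sketch of the kind of policy attached to the identity pool's authenticated role is shown below; the bucket name is a placeholder, and the ${cognito-identity.amazonaws.com:sub} policy variable resolves at request time to the caller's unique identity ID, which confines each user to their own prefix.

```python
import json

# Minimal sketch of a per-user S3 access policy for a Cognito identity
# pool role. The bucket name is hypothetical; the policy variable is
# substituted by IAM with the caller's Cognito identity ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-user-files/${cognito-identity.amazonaws.com:sub}/*",
    }],
}
print(json.dumps(policy, indent=2))
```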
Question #4
A company is building a scalable data management solution by using AWS services to improve the speed and agility of development. The solution will ingest large volumes of data from various sources and will process this data through multiple business rules and transformations.
The solution requires business rules to run in sequence and to handle reprocessing of data if errors occur when the business rules run. The company needs the solution to be scalable and to require the least possible maintenance.
Which AWS service should the company use to manage and automate the orchestration of the data flows to meet these requirements?
- A. AWS Batch
- B. AWS Step Functions (Most Voted)
- C. AWS Glue
- D. AWS Lambda
Correct Answer:
B

To address the requirements of sequential execution and error handling in a scalable data management solution, AWS Step Functions is the optimal choice. This service adeptly handles intricate workflows, managing dependencies and reprocessing needs effectively. It supports automatic error retries that align with the need for robust error handling, ensuring processes continue smoothly even when disruptions occur. AWS Lambda, while powerful for executing code in response to events, lacks inherent capabilities for orchestrating extensive workflows, making Step Functions the more fitting option for this scenario.
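To make the Step Functions fit concrete, here is a minimal Amazon States Language sketch of two business rules running in sequence with automatic retry on failure; the Lambda function ARNs and retry values are placeholders.

```python
import json

# Minimal state machine definition: RuleOne runs before RuleTwo, and each
# task retries on failure with exponential backoff before the execution
# fails. Resource ARNs are hypothetical.
definition = {
    "StartAt": "RuleOne",
    "States": {
        "RuleOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:rule-one",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 5,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Next": "RuleTwo",
        },
        "RuleTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:rule-two",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 5,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "End": True,
        },
    },
}
print(json.dumps(definition))
```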
Question #5
A developer has created an AWS Lambda function that is written in Python. The Lambda function reads data from objects in Amazon S3 and writes data to an Amazon DynamoDB table. The function is successfully invoked from an S3 event notification when an object is created. However, the function fails when it attempts to write to the DynamoDB table.
What is the MOST likely cause of this issue?
- A. The Lambda function's concurrency limit has been exceeded.
- B. The DynamoDB table requires a global secondary index (GSI) to support writes.
- C. The Lambda function does not have IAM permissions to write to DynamoDB. (Most Voted)
- D. The DynamoDB table is not running in the same Availability Zone as the Lambda function.
Correct Answer:
C

Upon reviewing the setup, the most plausible reason for a failure in writing data to a DynamoDB table from a Lambda function triggered by S3 is missing IAM permissions. Lambda functions need explicit permissions in their execution role to interact with other AWS services, and if the function lacks the necessary permissions to write to DynamoDB, the write calls will fail. DynamoDB tables do not depend on Availability Zone alignment with Lambda, because both are Regional services and Lambda functions are designed to operate across multiple Availability Zones for high availability. Therefore, ensure the Lambda function's execution role has the appropriate IAM permissions for actions on DynamoDB.
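For illustration, a sketch of the statement that is likely missing from the function's execution role follows; the table ARN and the exact action list are assumptions based on the scenario.

```python
import json

# Minimal sketch of an execution-role statement allowing writes to the
# target table. Account ID and table name are hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111111111111:table/example-table",
    }],
}
print(json.dumps(policy, indent=2))
```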
Question #6
A developer is creating an AWS CloudFormation template to deploy Amazon EC2 instances across multiple AWS accounts. The developer must choose the EC2 instances from a list of approved instance types.
How can the developer incorporate the list of approved instance types in the CloudFormation template?
- A. Create a separate CloudFormation template for each EC2 instance type in the list.
- B. In the Resources section of the CloudFormation template, create resources for each EC2 instance type in the list.
- C. In the CloudFormation template, create a separate parameter for each EC2 instance type in the list.
- D. In the CloudFormation template, create a parameter with the list of EC2 instance types as AllowedValues. (Most Voted)
Correct Answer:
D

When utilizing AWS CloudFormation to manage EC2 instances across multiple accounts, option D is indeed the best practice. By specifying a parameter in the template and defining 'AllowedValues' as a list of approved EC2 instance types, you ensure that users can only launch instances that adhere to your predefined standards. This method enhances control and maintains the integrity of your cloud environment by restricting selections to the approved list, greatly simplifying management and scale operations without increasing template complexity. It’s an efficient way to enforce policies while providing flexibility within controlled parameters.
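A minimal sketch of such a template follows, written here as a Python dict and serialized to JSON (the same structure works in YAML); the instance types and AMI ID are placeholders.

```python
import json

# CloudFormation rejects any stack launch whose InstanceType parameter is
# not in AllowedValues, enforcing the approved list at deploy time.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "AllowedValues": ["t3.micro", "t3.small", "m5.large"],  # approved list (hypothetical)
            "Default": "t3.micro",
            "Description": "Approved EC2 instance types only",
        }
    },
    "Resources": {
        "Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": {"Ref": "InstanceType"},
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            },
        }
    },
}
print(json.dumps(template, indent=2))
```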
Question #7
A developer has an application that makes batch requests directly to Amazon DynamoDB by using the BatchGetItem low-level API operation. The responses frequently return values in the UnprocessedKeys element.
Which actions should the developer take to increase the resiliency of the application when the batch response includes values in UnprocessedKeys? (Choose two.)
- A. Retry the batch operation immediately.
- B. Retry the batch operation with exponential backoff and randomized delay. (Most Voted)
- C. Update the application to use an AWS software development kit (AWS SDK) to make the requests. (Most Voted)
- D. Increase the provisioned read capacity of the DynamoDB tables that the operation accesses.
- E. Increase the provisioned write capacity of the DynamoDB tables that the operation accesses.
Correct Answer:
BC

To enhance the resiliency of the application when BatchGetItem responses include values in the UnprocessedKeys element, two strategies work together. First, retrying the batch operation with exponential backoff and a randomized delay spreads the retries over time, which avoids overwhelming the table with repeated requests in quick succession; retrying immediately would only amplify the throttling. Second, using an AWS SDK is recommended because the SDKs implement this retry logic for unprocessed keys automatically, giving the application robust retry behavior without custom code. Increasing provisioned read or write capacity can mask the symptom but is a cost decision rather than a resiliency measure, and UnprocessedKeys can also appear for reasons unrelated to capacity, such as exceeding the 16 MB response size limit.
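A minimal sketch of what options B and C amount to is shown below: the hand-rolled loop retries only the UnprocessedKeys with exponential backoff and full jitter, which is essentially the behavior an AWS SDK provides out of the box.

```python
import random
import time

import boto3

dynamodb = boto3.client("dynamodb")

def batch_get_with_backoff(request_items, max_attempts=5):
    """Retry UnprocessedKeys with exponential backoff and full jitter.

    request_items uses the BatchGetItem wire format, e.g.
    {"example-table": {"Keys": [{"pk": {"S": "id-1"}}]}} (hypothetical).
    """
    items = []
    for attempt in range(max_attempts):
        response = dynamodb.batch_get_item(RequestItems=request_items)
        for table_items in response["Responses"].values():
            items.extend(table_items)
        # Only the keys DynamoDB could not process are retried.
        request_items = response.get("UnprocessedKeys", {})
        if not request_items:
            break
        # Exponential backoff with randomized delay (full jitter).
        time.sleep(random.uniform(0, 0.1 * (2 ** attempt)))
    return items
```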
Question #8
A company is running a custom application on a set of on-premises Linux servers that are accessed using Amazon API Gateway. AWS X-Ray tracing has been enabled on the API test stage.
How can a developer enable X-Ray tracing on the on-premises servers with the LEAST amount of configuration?
- A. Install and run the X-Ray SDK on the on-premises servers to capture and relay the data to the X-Ray service.
- B. Install and run the X-Ray daemon on the on-premises servers to capture and relay the data to the X-Ray service. (Most Voted)
- C. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTraceSegments API call.
- D. Capture incoming requests on-premises and configure an AWS Lambda function to pull, process, and relay relevant data to X-Ray using the PutTelemetryRecords API call.
Correct Answer:
B

To easily enable X-Ray tracing on on-premises servers with minimal configuration, install and run the AWS X-Ray daemon. This approach leverages the daemon's capability to listen for traffic, gather raw segment data, and efficiently relay it to the AWS X-Ray API without the need for extensive setup or additional code. This ensures that your existing setup requires minimal alterations to integrate with the powerful monitoring and troubleshooting tools offered by AWS X-Ray.
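To show why the daemon approach needs so little configuration: once the daemon is running on the server, an application (or the X-Ray SDK on its behalf) only has to emit segment documents to it over local UDP port 2000, and the daemon batches and relays them to the X-Ray API using its own AWS credentials. The sketch below sends one hand-built segment; the segment name and timing fields are illustrative.

```python
import json
import socket
import time
import uuid

# Hand-built segment document; field values are illustrative. The header
# line tells the daemon the payload format, per the daemon's UDP protocol.
now = time.time()
segment = {
    "name": "on-prem-app",
    "id": uuid.uuid4().hex[:16],
    "trace_id": "1-{:08x}-{}".format(int(now), uuid.uuid4().hex[:24]),
    "start_time": now,
    "end_time": now + 0.05,
}
header = json.dumps({"format": "json", "version": 1})
message = (header + "\n" + json.dumps(segment)).encode("utf-8")

# The daemon listens on UDP port 2000 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message, ("127.0.0.1", 2000))
```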
Question #9
A company wants to share information with a third party. The third party has an HTTP API endpoint that the company can use to share the information. The company has the required API key to access the HTTP API.
The company needs a way to manage the API key by using code. The integration of the API key with the application code cannot affect application performance.
Which solution will meet these requirements MOST securely?
- A. Store the API credentials in AWS Secrets Manager. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call. (Most Voted)
- B. Store the API credentials in a local code variable. Push the code to a secure Git repository. Use the local code variable at runtime to make the API call.
- C. Store the API credentials as an object in a private Amazon S3 bucket. Restrict access to the S3 object by using IAM policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
- D. Store the API credentials in an Amazon DynamoDB table. Restrict access to the table by using resource-based policies. Retrieve the API credentials at runtime by using the AWS SDK. Use the credentials to make the API call.
Correct Answer:
A

Storing API credentials in a local code variable and pushing the code to a Git repository, as suggested by option B, is a well-known security anti-pattern because of the high risk of exposure. AWS Secrets Manager, as described in option A, is specifically designed for secure secret storage and management and integrates seamlessly with the AWS SDK. Retrieving the credentials at runtime (and caching them in memory) does not affect application performance, and Secrets Manager ensures that the credentials are encrypted in transit and at rest, greatly minimizing security risks compared to the other methods suggested. Option A is therefore the most appropriate and secure solution for managing the API key.
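As a sketch of option A, the helper below fetches the key once and keeps it in memory so the lookup adds no latency to subsequent API calls; the secret name and JSON field are hypothetical, and AWS also publishes a ready-made caching client (aws-secretsmanager-caching) that implements the same idea with TTL-based refresh.

```python
import json

import boto3

_client = boto3.client("secretsmanager")
_cache = {}

def get_api_key(secret_id="third-party-api-key"):  # hypothetical secret name
    """Fetch the key once per process and reuse it from memory, so the
    Secrets Manager lookup adds no latency to each outbound API call."""
    if secret_id not in _cache:
        response = _client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(response["SecretString"])["api_key"]
    return _cache[secret_id]
```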
Question #10
A developer is deploying a new application to Amazon Elastic Container Service (Amazon ECS). The developer needs to securely store and retrieve different types of variables. These variables include authentication information for a remote API, the URL for the API, and credentials. The authentication information and API URL must be available to all current and future deployed versions of the application across development, testing, and production environments.
How should the developer retrieve the variables with the FEWEST application changes?
- A. Update the application to retrieve the variables from AWS Systems Manager Parameter Store. Use unique paths in Parameter Store for each variable in each environment. Store the credentials in AWS Secrets Manager in each environment. (Most Voted)
- B. Update the application to retrieve the variables from AWS Key Management Service (AWS KMS). Store the API URL and credentials as unique keys for each environment.
- C. Update the application to retrieve the variables from an encrypted file that is stored with the application. Store the API URL and credentials in unique files for each environment.
- D. Update the application to retrieve the variables from each of the deployed environments. Define the authentication information and API URL in the ECS task definition as unique names during the deployment process.
Correct Answer:
A

Option B, using AWS Key Management Service (AWS KMS), isn’t the best match for the question’s requirements. AWS KMS is tailored towards key management for encryption and decryption, not for directly storing and managing configuration details like API URLs or credentials. The more fitting approach is through AWS Systems Manager Parameter Store and AWS Secrets Manager as they are designed specifically to manage these types of sensitive information securely. They also facilitate easy integration and retrieval within applications, ensuring simpler management across different environments without frequent changes to the application’s codebase for each setting adjustment.
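A sketch of option A's retrieval pattern follows: parameters live under one path per environment so the same code runs everywhere, and only the credentials come from Secrets Manager. The paths, names, and environment variable are hypothetical, and pagination of the parameter listing is omitted for brevity.

```python
import os

import boto3

ssm = boto3.client("ssm")
secrets = boto3.client("secretsmanager")

# Hypothetical naming scheme: /myapp/<env>/... in Parameter Store and
# myapp/<env>/credentials in Secrets Manager. The environment name is
# the only per-deployment difference (e.g. set on the ECS task).
env = os.environ.get("APP_ENV", "dev")

response = ssm.get_parameters_by_path(Path=f"/myapp/{env}/", WithDecryption=True)
config = {p["Name"].rsplit("/", 1)[-1]: p["Value"] for p in response["Parameters"]}

api_url = config["api-url"]
credentials = secrets.get_secret_value(SecretId=f"myapp/{env}/credentials")["SecretString"]
```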