Amazon AWS Certified Database - Specialty Exam Practice Questions (P. 1)
Question #1
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a `could not connect to server: Connection times out` error message to Amazon CloudWatch Logs.
What is the cause of this error?
- A. The user name and password the application is using are incorrect.
- B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
- C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers. (Most Voted)
- D. The user name and password are correct, but the user is not authorized to use the DB instance.
Correct Answer: C
Reference: https://forums.aws.amazon.com/thread.jspa?threadID=129700
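The answer points to a missing inbound rule on the security group attached to the DB instance. As a rough sketch (not part of the original question), the rule described in option C could be added with boto3 roughly as follows; both security group IDs are placeholders.

```python
# Hedged sketch: allow inbound MySQL (3306) to the DB instance's security group
# from the application servers' security group. Group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

DB_SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # security group attached to the RDS instance (placeholder)
APP_SECURITY_GROUP_ID = "sg-0fedcba9876543210"  # security group attached to the application servers (placeholder)

ec2.authorize_security_group_ingress(
    GroupId=DB_SECURITY_GROUP_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Reference the app servers' security group instead of hard-coding IP ranges
            "UserIdGroupPairs": [{"GroupId": APP_SECURITY_GROUP_ID}],
        }
    ],
)
```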
Question #2
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add
RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)
- A. Set DeletionProtection to True (Most Voted)
- B. Set MultiAZ to True
- C. Set TerminationProtection to True
- D. Set DeleteAutomatedBackups to False (Most Voted)
- E. Set DeletionPolicy to Delete
- F. Set DeletionPolicy to Retain (Most Voted)
Correct Answer: ACF
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-accidental-updates/
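As an illustration of the three most-voted settings (A, D, and F), here is a hedged sketch of how they might appear in a CloudFormation template, built as a Python dict and printed as JSON; the resource name, engine, instance class, and password reference are placeholders, not values from the question.

```python
# Hedged sketch of an RDS resource with deletion safeguards in a CloudFormation
# template. All property values are illustrative placeholders.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppDatabase": {
            "Type": "AWS::RDS::DBInstance",
            # Stack-level safeguard: keep the DB instance if the stack is deleted (option F)
            "DeletionPolicy": "Retain",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "{{resolve:ssm-secure:/app/db/password:1}}",
                # Resource-level safeguard: block DeleteDBInstance calls (option A)
                "DeletionProtection": True,
                # Keep automated backups even if the instance is deleted (option D)
                "DeleteAutomatedBackups": False,
            },
        }
    },
}

print(json.dumps(template, indent=2))
```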
Question #3
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon
RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?
- A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
- B. The client-side application is caching the DNS data and its TTL is set too high (Most Voted)
- C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
- D. There were no active Aurora Replicas in the Aurora DB cluster
Correct Answer: C

The behavior described here actually matches option B, the most voted choice: the outage is best explained by client-side DNS caching. If the client's DNS time-to-live (TTL) is set too high, the application keeps using the stale IP address of the old primary instance after the failover and cannot reach the newly promoted instance until the cached entry expires. Keeping the client-side TTL low ensures DNS information is refreshed quickly enough to follow such backend changes without a prolonged connectivity gap.
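As a side note, a small stdlib-only Python check such as the sketch below can help confirm whether a client is holding a stale address after a failover; the endpoint name is a placeholder.

```python
# Hedged sketch: resolve the cluster endpoint directly and print the current
# address, for comparison with what a long-lived application process still uses.
import socket
import time

ENDPOINT = "database-1.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder

def current_addresses(hostname):
    """Resolve the hostname now, bypassing any application-level cache."""
    infos = socket.getaddrinfo(hostname, 3306, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

# Poll the endpoint; after a failover the printed address should change within the
# DNS TTL. If the application keeps connecting to the old address for much longer,
# its own DNS cache (or a high TTL) is the likely culprit.
for _ in range(3):
    print(time.strftime("%H:%M:%S"), current_addresses(ENDPOINT))
    time.sleep(30)
```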
Question #4
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts' inability to connect?
- A. Restart the DB cluster to apply the SSL change.
- B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect. (Most Voted)
- C. Add explicit mappings between the Data Analysts' IP addresses and the instance in the security group assigned to the DB cluster.
- D. Modify the Data Analysts' local client firewall to allow network traffic to AWS.
Correct Answer: D

If the Aurora DB cluster requires SSL/TLS for connections and the Database Specialist can connect using the analysts' accounts, the issue most likely lies with the analysts' client setup rather than the AWS configuration; this reasoning supports option B, the most voted choice. Per the AWS documentation on Aurora's SSL requirements, when secure transport is enforced, clients must connect over SSL, which means downloading the AWS root certificate and specifying it in their client connections. Modifying local client firewalls would be relevant if no desktops could connect, but since the Database Specialist's desktop connects successfully, the problem points to a configuration issue on the Data Analysts' machines. Instructing the Data Analysts to download the root certificate and set the appropriate SSL connection parameters is the practical step to resolve their connection issues.
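As a hedged illustration of option B from the analysts' side, assuming an Aurora PostgreSQL-compatible cluster and the psycopg2 driver (neither is stated in the question), a connection that verifies the server certificate might look like this; the endpoint, database name, and certificate path are placeholders.

```python
# Hedged sketch: connect over TLS using the downloaded AWS root CA bundle
# (e.g. global-bundle.pem). All connection values are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder
    port=5432,
    dbname="appdb",
    user="analyst1",
    password="********",
    sslmode="verify-full",           # require TLS and verify the server certificate
    sslrootcert="global-bundle.pem", # path to the downloaded AWS root certificate bundle
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```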
Question #5
A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.
What can the Database Specialist do to reduce the overall cost?
- A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
- B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
- C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table. (Most Voted)
- D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
Correct Answer: A

The TTL (Time to Live) approach described in option C, the most voted choice, is the cost-effective way to ensure data is automatically deleted after a specific period. In the context of the question, where data only needs to be stored for 2 days, configuring TTL lets DynamoDB delete expired items automatically, without additional resources such as AWS Glue jobs or stream consumers. This simplifies the architecture and removes both the cost of storing months of stale data and the need for manual clean-up.
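A minimal boto3 sketch of option C follows; the table name, key, and TTL attribute name are placeholders, not from the question.

```python
# Hedged sketch: enable TTL on a table and write items that expire two days out.
import time
import boto3

TABLE_NAME = "transactions"   # placeholder
TTL_ATTRIBUTE = "expires_at"  # placeholder epoch-seconds attribute

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# One-time table configuration: DynamoDB deletes expired items in the background
# at no additional cost (no Glue job, stream consumer, or truncation needed).
dynamodb.update_time_to_live(
    TableName=TABLE_NAME,
    TimeToLiveSpecification={"Enabled": True, "AttributeName": TTL_ATTRIBUTE},
)

# Each write stamps the item with an expiry of now + 2 days (epoch seconds).
two_days = 2 * 24 * 60 * 60
dynamodb.put_item(
    TableName=TABLE_NAME,
    Item={
        "order_id": {"S": "example-order-123"},  # placeholder partition key
        "payload": {"S": "example payload"},
        TTL_ATTRIBUTE: {"N": str(int(time.time()) + two_days)},
    },
)
```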
Question #6
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.
The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?
- A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
- B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
- C. Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications. (Most Voted)
- D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
Correct Answer: C

RDS event subscriptions publish notifications (delivered through Amazon SNS) for specific database events such as creation, deletion, backup, and shutdown, as outlined in the Amazon RDS User Guide, and the tracking systems can simply subscribe to those notifications. This offers a streamlined setup without additional coding or infrastructure management, meeting the requirement for minimal effort. While AWS Lambda with AWS CloudTrail (option B) could technically work, it would involve more steps and complexity, confirming option C as the most efficient choice for the scenario described.
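A hedged boto3 sketch of option C follows; the subscription name and SNS topic ARN are placeholders, and the event categories shown are illustrative (shutdown events are reported under the availability category).

```python
# Hedged sketch: create an RDS event subscription that publishes the relevant
# event categories to an SNS topic, which downstream tracking systems can consume.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",                    # placeholder
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-events",  # placeholder
    SourceType="db-instance",
    EventCategories=["creation", "deletion", "backup", "availability"],
    Enabled=True,
)
```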
Question #7
A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely.
Which approach should the Database Specialist take to securely manage the database credentials?
- A. Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
- B. Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
- C. Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days. (Most Voted)
- D. Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
Correct Answer: B

For securely managing database credentials while migrating to AWS, IAM database authentication is a robust choice. With IAM handling database access, there is no long-lived database password to manage: the authentication tokens IAM generates are valid for only 15 minutes, which enhances security. This approach also meets the requirement to refresh credentials regularly without manual updates, keeping credential management both secure and efficient, and it leverages AWS's built-in capabilities to maximize security and compliance with the company's policies.
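As a hedged sketch of the token-based flow described above, assuming IAM database authentication is enabled on the RDS for PostgreSQL instance and the psycopg2 driver is used (neither is stated in the question); the endpoint, database user, and region are placeholders.

```python
# Hedged sketch: obtain a short-lived IAM authentication token and use it in
# place of a password. All connection values are placeholders.
import boto3
import psycopg2

HOST = "appdb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder
USER = "app_iam_user"  # database user granted the rds_iam role (placeholder)
REGION = "us-east-1"

rds = boto3.client("rds", region_name=REGION)

# The token replaces a long-lived password and is only valid for 15 minutes.
token = rds.generate_db_auth_token(DBHostname=HOST, Port=5432, DBUsername=USER, Region=REGION)

conn = psycopg2.connect(
    host=HOST,
    port=5432,
    dbname="appdb",
    user=USER,
    password=token,
    sslmode="require",  # IAM authentication requires an SSL connection
)
```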
Question #8
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.
Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)
- A. Enable in-transit and at-rest encryption on the ElastiCache cluster. (Most Voted)
- B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
- C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only. (Most Voted)
- D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
- E. Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.
- F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands. (Most Voted)
Correct Answer: ABE
Reference: https://aws.amazon.com/getting-started/tutorials/setting-up-a-redis-cluster-with-amazon-elasticache/
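As a hedged illustration of the most-voted controls (A, C, and F), a boto3 call that creates the cluster with encryption and an AUTH token might look roughly like this; the identifiers, node type, token, and security group ID are placeholders.

```python
# Hedged sketch: a Redis (cluster mode enabled) replication group with
# in-transit and at-rest encryption plus an AUTH token.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

elasticache.create_replication_group(
    ReplicationGroupId="shared-data-cache",            # placeholder
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    CacheParameterGroupName="default.redis7.cluster.on",  # cluster-mode-enabled group (name varies by engine version)
    NumNodeGroups=2,                  # cluster mode enabled: multiple shards
    ReplicasPerNodeGroup=1,
    Port=6379,
    TransitEncryptionEnabled=True,    # option A: encrypt data in transit
    AtRestEncryptionEnabled=True,     # option A: encrypt data at rest
    AuthToken="replace-with-a-long-random-token",  # option F: require AUTH on every connection
    SecurityGroupIds=["sg-0123456789abcdef0"],     # option C: restrict inbound 6379 to trusted clients (placeholder)
)
```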
Question #9
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime.
What is the FASTEST way to accomplish this?
- A. Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
- B. Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
- C. Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
- D. Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover. (Most Voted)
Correct Answer: C

On closer examination, migrating with an Aurora Replica and promoting it during the cutover, as described in option D (the most voted choice), provides the fastest path with minimal downtime, contrary to the initially marked answer (option C). Replication keeps the Aurora cluster continuously in sync with the source while it continues serving live traffic, so the cutover is reduced to a brief promotion step; a snapshot restore, by contrast, leaves any changes made after the snapshot to be handled separately, which means more downtime.
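As a rough sketch of option D, assuming an Aurora read replica of the RDS for PostgreSQL instance can be created through the CreateDBCluster API with ReplicationSourceIdentifier (the exact path should be verified against current AWS documentation); all identifiers are placeholders.

```python
# Hedged sketch: create an Aurora PostgreSQL replica cluster of the RDS instance,
# add an instance to it, then promote the cluster at cutover.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
source_arn = "arn:aws:rds:us-east-1:123456789012:db:source-postgres"  # placeholder

# 1) Create an Aurora cluster that replicates from the RDS for PostgreSQL instance.
rds.create_db_cluster(
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier=source_arn,
)

# 2) Add an instance to the replica cluster so it can serve connections.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-pg-replica-instance-1",
    DBClusterIdentifier="aurora-pg-replica",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
)

# 3) At cutover, once replica lag is near zero, promote the cluster to standalone.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="aurora-pg-replica")
```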
Question #10
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?
- A. In the same Region and VPC of the source DB instance
- B. In the same Region and VPC as the target DB instance
- C. In the same VPC and Availability Zone as the target DB instance (Most Voted)
- D. In the same VPC and Availability Zone as the source DB instance
Correct Answer: D

Placing the AWS DMS replication instance in the same VPC and Availability Zone as the source database, as the AWS documentation suggests, generally gives optimal performance by reducing network latency on reads from the source. This minimizes the time spent moving data between the source database and the replication instance, which is crucial for the efficiency of the migration. Specific use cases may differ, so consulting the AWS DMS guidelines for the given scenario is still recommended.
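As a hedged sketch of the placement the marked answer describes, a DMS replication instance could be launched in the source VPC and Availability Zone roughly as follows; the subnet group, AZ, and instance class are placeholders.

```python
# Hedged sketch: launch the DMS replication instance in the source Region/VPC,
# pinned to the source DB instance's Availability Zone.
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # same Region as the source Oracle instance

dms.create_replication_instance(
    ReplicationInstanceIdentifier="oracle-to-postgres-repl",
    ReplicationInstanceClass="dms.c5.xlarge",
    AllocatedStorage=200,
    ReplicationSubnetGroupIdentifier="source-vpc-subnet-group",  # subnet group in the source VPC (placeholder)
    AvailabilityZone="us-east-1a",                               # same AZ as the source DB instance (placeholder)
    MultiAZ=False,
)
```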