Amazon AWS Certified Database - Specialty Exam Practice Questions (P. 2)
Question #11
The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality.
This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?
- A. Quickly rewind the DB cluster to a point in time before the release using Backtrack. (Most Voted)
- B. Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
- C. Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
- D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
Correct Answer: D

Option D is the best fit here. Creating a clone of the DB cluster and backtracking the clone to just before the erroneous DELETE command recovers the lost rows without discarding the valid transactions committed in the four hours since the release. Rewinding the production cluster directly (option A) would be fast but would also roll back that legitimate work, so the clone-and-copy approach minimizes data loss while restoring the database to its correct state quickly.
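As an illustration of the clone-and-backtrack workflow, the following boto3 sketch creates a copy-on-write clone and rewinds it. The cluster identifiers and release timestamp are hypothetical, and in practice you would wait for the clone (with a DB instance added to it) to become available before backtracking and copying rows back:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# 1. Create a fast, copy-on-write clone of the production cluster.
#    All identifiers below are hypothetical.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora-cluster",
    DBClusterIdentifier="prod-aurora-clone",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
    BacktrackWindow=8 * 3600,  # keep Backtrack enabled on the clone (seconds)
)

# 2. After the clone (plus a DB instance added to it) is available,
#    rewind the clone to just before the bad release.
rds.backtrack_db_cluster(
    DBClusterIdentifier="prod-aurora-clone",
    BacktrackTo=datetime(2024, 6, 1, 9, 55, tzinfo=timezone.utc),  # hypothetical release time
)

# 3. Copy the deleted rows from the clone back into the original database,
#    e.g. via mysqldump of the affected table or INSERT ... SELECT.
```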
Question #12
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
- A. Review the stack drift before modifying the template
- B. Create and review a change set before applying it (Most Voted)
- C. Export the database resources as stack outputs
- D. Define the database resources in a nested stack
- E. Set a stack policy for the database resources (Most Voted)
Correct Answer: AD

To ensure that the Application team's modifications do not impact the deployed Amazon RDS resources, the Database Specialist should review stack drift and define the database resources in a nested stack. A drift review identifies any divergence between the template's declared configuration and the actual state of the deployed resources before the template is modified. Placing the database resources in a nested stack isolates them from broader updates to the parent stack, safeguarding them from unintentional changes while the load-testing resources are expanded.
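As a sketch of the drift-review step (the stack name is hypothetical), drift detection can be started and inspected with boto3 before anyone modifies the template:

```python
import time
import boto3

cfn = boto3.client("cloudformation")
STACK = "prod-three-tier-stack"  # hypothetical stack name

# Start drift detection and wait for it to complete.
detection_id = cfn.detect_stack_drift(StackName=STACK)["StackDriftDetectionId"]
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

# List resources whose live configuration no longer matches the template;
# any drifted RDS resources would show up here.
drifts = cfn.describe_stack_resource_drifts(
    StackName=STACK,
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])
```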
Question #13
A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
- A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster. (Most Voted)
- B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
- C. Edit and enable Aurora DB cluster cache management in parameter groups. (Most Voted)
- D. Set TCP keepalive parameters to a high value.
- E. Set JDBC connection string timeout variables to a low value. (Most Voted)
- F. Set Java DNS caching timeouts to a high value.
Correct Answer: ABC

Using the provided Aurora reader and writer endpoints ensures connections are rerouted automatically during a failover, because these endpoints are updated to point to the new primary instance. A CloudWatch alert that triggers a restore in another Availability Zone provides a recovery path if the primary DB cluster becomes unreachable. Finally, enabling cluster cache management in the DB cluster parameter group keeps the designated failover target's buffer cache warm, so a newly promoted primary can serve queries at full speed immediately after failover, reducing post-failover load spikes and application downtime.
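For reference, cluster cache management for Aurora PostgreSQL is controlled by the apg_ccm_enabled parameter in the DB cluster parameter group, and the designated failover target must share the writer's promotion tier. A minimal boto3 sketch, with hypothetical resource names:

```python
import boto3

rds = boto3.client("rds")

# Turn on cluster cache management via the cluster parameter group
# (group and instance names are hypothetical).
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-cluster-params",
    Parameters=[{
        "ParameterName": "apg_ccm_enabled",
        "ParameterValue": "1",
        "ApplyMethod": "pending-reboot",
    }],
)

# CCM keeps the cache of the designated failover target warm; that reader
# must be in the same promotion tier as the writer (tier 0).
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-reader-1",
    PromotionTier=0,
    ApplyImmediately=True,
)
```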
Question #14
A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?
- A. Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
- B. Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
- C. Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region. (Most Voted)
- D. Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
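The most-voted option C matches the flow in the referenced snapshot documentation: create a snapshot copy grant in the destination Region, then enable cross-Region snapshot copy from the source cluster. A minimal boto3 sketch of that flow, with hypothetical Regions, names, and key ARN:

```python
import boto3

# Step 1, in the destination Region: create a snapshot copy grant so Redshift
# can encrypt copied snapshots with a KMS key there. Names, Regions, and the
# key ARN are hypothetical.
redshift_dst = boto3.client("redshift", region_name="us-west-2")
redshift_dst.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/hypothetical-key-id",
)

# Step 2, in the source Region: enable cross-Region snapshot copy on the
# cluster, referencing the grant created above.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="finance-dw-cluster",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)
```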

Question #15
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?
- A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
- B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
- C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
- D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed. (Most Voted)
Correct Answer: D

Setting up automatic scaling on the Amazon Aurora DB cluster, as outlined in option D, addresses the fluctuating workload with minimal user intervention. Aurora Auto Scaling adjusts the number of reader nodes dynamically, adding them when reporting demand is high and removing them when they are no longer needed. This meets the workload at different times without manual adjustments or a permanently over-provisioned six-node cluster, optimizing costs while maintaining performance.
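Aurora replica auto scaling is configured through Application Auto Scaling. The sketch below registers the reader count as a scalable target and attaches a target-tracking policy; the cluster name, capacities, and target value are hypothetical:

```python
import boto3

aas = boto3.client("application-autoscaling")
RESOURCE_ID = "cluster:prod-aurora-cluster"  # hypothetical cluster name

# Register the Aurora cluster's reader count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,   # baseline readers for the always-on OLTP workload
    MaxCapacity=5,   # up to the benchmarked peak (1 writer + 5 readers)
)

# Target-tracking policy: add readers when average reader CPU rises,
# remove them when the reporting burst ends.
aas.put_scaling_policy(
    PolicyName="reporting-read-scaling",
    ServiceNamespace="rds",
    ResourceId=RESOURCE_ID,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)
```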
Question #16
A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies.
The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.
Which step will provide additional security?
- A. Set up NACLs that allow the entire EC2 subnet to access the DB instance
- B. Disable the master user account
- C. Set up a security group that blocks SSH to the DB instance
- D. Set up RDS to use SSL for data in transit (Most Voted)
Correct Answer: D
Reference:
https://aws.amazon.com/blogs/database/applying-best-practices-for-securing-sensitive-data-in-amazon-rds/
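For illustration, SSL/TLS can be enforced server-side with the require_secure_transport parameter for RDS for MySQL, and clients then connect using the RDS CA bundle. A minimal sketch with hypothetical names:

```python
import os
import boto3
import pymysql

# Server side: require TLS for every connection by setting
# require_secure_transport in the DB parameter group (name hypothetical).
rds = boto3.client("rds")
rds.modify_db_parameter_group(
    DBParameterGroupName="finance-mysql-params",
    Parameters=[{
        "ParameterName": "require_secure_transport",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)

# Client side: connect with the RDS CA bundle so the session is encrypted
# and the server certificate is verified (endpoint is hypothetical).
conn = pymysql.connect(
    host="finance-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="app_user",
    password=os.environ["DB_PASSWORD"],
    ssl={"ca": "/opt/certs/global-bundle.pem"},
)
```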

Question #17
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?
- A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
- B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
- C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling. (Most Voted)
- D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.
Correct Answer: C

Amazon Redshift Concurrency Scaling is designed for fluctuating query demand: when concurrent read queries spike, Redshift transparently adds transient cluster capacity and releases it when demand subsides, with no manual resizing. This keeps end-user queries against current-year data fast even under bursty load, while the full 15-year dataset remains on low-cost Amazon S3 and is queried through Amazon Redshift Spectrum when needed.
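For reference, Concurrency Scaling is governed by cluster parameters. The sketch below raises the cap on transient scaling clusters (the parameter group name and limit are hypothetical); the WLM queue handling the end-user queries must also have its concurrency_scaling mode set to "auto" so eligible queries are routed to the scaling clusters:

```python
import boto3

redshift = boto3.client("redshift")

# Allow up to four transient concurrency-scaling clusters for this
# parameter group (name and limit are hypothetical).
redshift.modify_cluster_parameter_group(
    ParameterGroupName="dw-cluster-params",
    Parameters=[{
        "ParameterName": "max_concurrency_scaling_clusters",
        "ParameterValue": "4",
    }],
)
```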
Question #18
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?
- A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
- B. Create an AWS CloudFormation template and deploy the template to all the Regions.
- C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions. (Most Voted)
- D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
Correct Answer: C

The correct approach for deploying and managing DynamoDB tables across multiple Regions with identical configurations is AWS CloudFormation StackSets. A stack set extends CloudFormation to create, update, or delete stacks across multiple Regions (and accounts) in a single operation, so deployments stay consistent and configuration changes propagate to every Region without manual per-Region work.
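A minimal boto3 sketch of the stack-set workflow, assuming a hypothetical template file, account ID, and Region list:

```python
import boto3

cfn = boto3.client("cloudformation")

# The template defining the DynamoDB table (file name is hypothetical).
with open("highscores-table.yaml") as f:
    template = f.read()

# Create the stack set once...
cfn.create_stack_set(
    StackSetName="game-highscores",
    TemplateBody=template,
)

# ...then stamp identical stacks into each Region; adding a Region later is
# another create_stack_instances call, and update_stack_set propagates
# configuration changes everywhere.
cfn.create_stack_instances(
    StackSetName="game-highscores",
    Accounts=["111122223333"],  # hypothetical account ID
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)
```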
Question #19
A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.
How can the Database Specialists accomplish this?
- A. Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
- B. Create appropriate Amazon CloudWatch dashboards to contain specific periods of time
- C. Enable Amazon RDS Performance Insights and review the appropriate dashboard (Most Voted)
- D. Enable Enhanced Monitoring with the appropriate settings
Correct Answer: C

To narrow the investigation down to specific wait events on an Amazon RDS for MySQL DB instance, the team should enable Amazon RDS Performance Insights. Its dashboard visualizes database load broken down by wait event, making performance bottlenecks straightforward to identify. Unlike CloudWatch dashboards or Enhanced Monitoring, which surface instance- and OS-level metrics, Performance Insights provides this database-level view directly.
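As a sketch, Performance Insights can be enabled on the instance and then queried for database load grouped by wait event through the pi API; the instance identifier and resource ID below are hypothetical:

```python
from datetime import datetime, timedelta, timezone
import boto3

rds = boto3.client("rds")
pi = boto3.client("pi")

# Turn on Performance Insights for the instance (identifier hypothetical).
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql-instance",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days (free tier)
)

# Query average DB load grouped by wait event for the last hour. The
# Identifier is the instance's DbiResourceId (shown here as a placeholder).
end = datetime.now(timezone.utc)
resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKL",  # hypothetical DbiResourceId
    MetricQueries=[{
        "Metric": "db.load.avg",
        "GroupBy": {"Group": "db.wait_event"},
    }],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    PeriodInSeconds=60,
)
```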
Question #20
A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.
What should the company do to achieve this in the shortest amount of time?
- A. Use a blue-green deployment with a complete application-level failover test
- B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover (Most Voted)
- C. Use RDS fault injection queries to simulate the primary node failure
- D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
Correct Answer: C
Reference:
https://wellarchitectedlabs.com/Reliability/300_Testing_for_Resiliency_of_EC2_RDS_and_S3/Lab_Guide.html
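For the most-voted option B, a Multi-AZ failover can also be triggered with a single API call instead of the console. A minimal boto3 sketch with a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Force a Multi-AZ failover during the reboot so the standby in the other
# Availability Zone is promoted (instance identifier hypothetical).
rds.reboot_db_instance(
    DBInstanceIdentifier="prod-oracle-instance",
    ForceFailover=True,
)
```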
