AWS Certified Solutions Architect - Associate SAA-C03 Exam Practice Questions (P. 2)
Question #11
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?
- A. Use AWS Secrets Manager. Turn on automatic rotation. (Most Voted)
- B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
- C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
- D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.
Correct Answer: A

AWS Secrets Manager is built for exactly this use case: it stores the database credentials centrally and, with automatic rotation turned on, rotates them on a schedule using an AWS-provided rotation function for RDS and Aurora, with no scripts to maintain. Systems Manager Parameter Store (option B) can store credentials securely, but it has no native rotation feature, so option B's "turn on automatic rotation" would have to be built with custom automation, which adds rather than minimizes operational overhead. Moving the credential file to S3 or to encrypted EBS volumes (options C and D) only relocates the file and still leaves rotation and distribution to be managed manually.
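For reference, here is a minimal boto3 sketch of option A, assuming a hypothetical secret name, placeholder credential values, and a placeholder ARN for a rotation function built from one of the AWS-published RDS/Aurora rotation templates:

```python
import json

import boto3

secretsmanager = boto3.client("secretsmanager")

# Store the Aurora credentials centrally instead of in a local file.
secret = secretsmanager.create_secret(
    Name="prod/aurora/app-credentials",  # hypothetical name
    SecretString=json.dumps({"username": "app_user", "password": "CHANGE_ME"}),
)

# Turn on automatic rotation. The Lambda ARN is a placeholder for a
# rotation function created from an AWS-provided rotation template.
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:AuroraSecretRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```

The application then fetches the credentials at runtime with get_secret_value instead of reading a file from disk.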
Question #12
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
- A. Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution. (Most Voted)
- B. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
- C. Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
- D. Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Correct Answer: A

Creating a single Amazon CloudFront distribution with both the S3 bucket and the ALB as origins improves performance for both data types: static content from the S3 bucket is cached at edge locations close to end users, while dynamic requests to the ALB are accelerated over the AWS global network even though they are not cached. Configuring Route 53 to alias the company's domain to the CloudFront distribution keeps a single entry point and avoids the extra services and domain names the other options introduce. Note also that a Global Accelerator standard accelerator does not support S3 buckets or CloudFront distributions as endpoints, which rules out options B, C, and D.
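As a rough boto3 sketch of option A (domain names and IDs are placeholders; the cache and origin request policy IDs are AWS's documented managed CachingOptimized, CachingDisabled, and AllViewer policies, worth verifying against current documentation):

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Documented managed policy IDs (verify against current AWS docs):
CACHING_OPTIMIZED = "658327ea-f89d-4fab-a63d-7e88639e58f6"  # cache static content
CACHING_DISABLED = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"   # pass dynamic requests through
ALL_VIEWER = "216adef6-5c7f-47e4-b989-5492eafa07d3"         # forward headers/cookies/queries

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Static from S3, dynamic from ALB",
        "Enabled": True,
        "Origins": {
            "Quantity": 2,
            "Items": [
                {
                    "Id": "s3-static",
                    "DomainName": "example-static.s3.amazonaws.com",  # placeholder
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                },
                {
                    "Id": "alb-dynamic",
                    "DomainName": "my-alb-123456.us-east-1.elb.amazonaws.com",  # placeholder
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                },
            ],
        },
        # Dynamic requests go to the ALB by default and are not cached.
        "DefaultCacheBehavior": {
            "TargetOriginId": "alb-dynamic",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_DISABLED,
            "OriginRequestPolicyId": ALL_VIEWER,
        },
        # Static assets under /static/* are cached from the S3 origin.
        "CacheBehaviors": {
            "Quantity": 1,
            "Items": [
                {
                    "PathPattern": "/static/*",
                    "TargetOriginId": "s3-static",
                    "ViewerProtocolPolicy": "redirect-to-https",
                    "CachePolicyId": CACHING_OPTIMIZED,
                }
            ],
        },
    }
)
```

The final step is a Route 53 alias record that points the company's domain at the distribution's *.cloudfront.net domain name.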
Question #13
A company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
- A. Store the credentials as secrets in AWS Secrets Manager. Use multi-Region secret replication for the required Regions. Configure Secrets Manager to rotate the secrets on a schedule. (Most Voted)
- B. Store the credentials as secrets in AWS Systems Manager by creating a secure string parameter. Use multi-Region secret replication for the required Regions. Configure Systems Manager to rotate the secrets on a schedule.
- C. Store the credentials in an Amazon S3 bucket that has server-side encryption (SSE) enabled. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Lambda function to rotate the credentials.
- D. Encrypt the credentials as secrets by using AWS Key Management Service (AWS KMS) multi-Region customer managed keys. Store the secrets in an Amazon DynamoDB global table. Use an AWS Lambda function to retrieve the secrets from DynamoDB. Use the RDS API to rotate the secrets.
Correct Answer: A

AWS Secrets Manager is ideal for managing and rotating credentials with minimal operational overhead. It specifically allows for the secure storage, management, and scheduled rotation of secrets, including database credentials. The multi-Region secret replication capability also ensures that credentials are consistently updated and available across multiple Regions, fulfilling the company's need during monthly maintenance without requiring additional complex steps. This setup vastly simplifies the credential management process, compared to other options that involve more manual intervention and steps.
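A compact boto3 sketch of option A follows, with hypothetical Regions and names and a placeholder rotation Lambda ARN:

```python
import json

import boto3

# Assume us-east-1 is the primary Region and the others are replicas.
secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# Create the secret and replicate it to the required Regions in one call.
secret = secretsmanager.create_secret(
    Name="prod/rds-mysql/admin",  # hypothetical name
    SecretString=json.dumps({"username": "admin", "password": "CHANGE_ME"}),
    AddReplicaRegions=[
        {"Region": "eu-west-1"},
        {"Region": "ap-southeast-2"},
    ],
)

# Rotate on the first day of each month to line up with the maintenance
# window; the rotation Lambda ARN is a placeholder.
secretsmanager.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:MySQLSecretRotation",
    RotationRules={"ScheduleExpression": "cron(0 4 1 * ? *)"},
)
```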
Question #14
A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.
Which solution will meet these requirements?
- A. Use Amazon Redshift with a single node for leader and compute functionality.
- B. Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
- C. Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas. (Most Voted)
- D. Use Amazon ElastiCache for Memcached with EC2 Spot Instances.
Correct Answer: C

Amazon Aurora with a Multi-AZ deployment and Aurora Auto Scaling meets both requirements. Aurora Auto Scaling automatically adjusts the number of Aurora Replicas to absorb unpredictable read spikes, and because Aurora is MySQL-compatible, the existing MySQL 8.0 application can migrate with minimal changes. High availability comes from Aurora's storage layer, which replicates data across multiple Availability Zones, and from fast failover to a replica if the writer fails. Redshift (A) is a data warehouse rather than a transactional database, a Single-AZ RDS deployment (B) does not provide high availability, and ElastiCache on Spot Instances (D) is a cache, not a durable system of record.
Question #15
A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.
Which solution will meet these requirements?
- A. Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
- B. Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
- C. Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC. (Most Voted)
- D. Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.
Correct Answer: C

AWS Network Firewall is the right fit. It is a managed service for creating, managing, and monitoring stateful and stateless traffic-filtering rules, and it enforces those rules on traffic entering and leaving the VPC, which is the same inspect-and-filter role the on-premises appliance performed. GuardDuty (A) is a threat-detection service that analyzes logs and cannot block or filter traffic inline. Traffic Mirroring (B) copies packets to out-of-band analysis tools but does not filter the original flow. Firewall Manager (D) centrally administers firewall rules across accounts but still relies on a service such as Network Firewall to do the actual inspection.
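As a sketch of what option C's rules can look like, here is a stateful rule group with one illustrative Suricata-compatible rule (the names, capacity, and the rule itself are placeholders):

```python
import boto3

network_firewall = boto3.client("network-firewall")

network_firewall.create_rule_group(
    RuleGroupName="production-inspection-rules",  # hypothetical name
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            # Example Suricata rule: drop outbound Telnet from the VPC CIDR.
            "RulesString": (
                'drop tcp 10.0.0.0/16 any -> any 23 '
                '(msg:"Block outbound Telnet"; sid:100001; rev:1;)'
            )
        }
    },
)
```

The rule group is then referenced from a firewall policy, which is attached to a Network Firewall firewall deployed in the production VPC's firewall subnets.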
Question #16
A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access.
Which solution will meet these requirements?
- A. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
- B. Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups. (Most Voted)
- C. Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
- D. Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
Correct Answer: B
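The voted answer fits because Amazon QuickSight provides the visualization layer across both data sources (RDS for PostgreSQL is a supported QuickSight data source) and shares dashboards with QuickSight users and groups rather than IAM roles, while the Athena options in C and D produce query results, not visualizations. A sketch of the sharing step, with hypothetical account, group, and dashboard identifiers (the permission action lists are assumptions based on QuickSight's documented owner and viewer permission sets):

```python
import boto3

quicksight = boto3.client("quicksight")
ACCOUNT_ID = "111122223333"  # placeholder

quicksight.update_dashboard_permissions(
    AwsAccountId=ACCOUNT_ID,
    DashboardId="data-lake-overview",  # hypothetical dashboard
    GrantPermissions=[
        {
            # QuickSight group ARN for the management team: full access.
            "Principal": f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:group/default/management",
            "Actions": [
                "quicksight:DescribeDashboard",
                "quicksight:ListDashboardVersions",
                "quicksight:QueryDashboard",
                "quicksight:UpdateDashboard",
                "quicksight:DeleteDashboard",
                "quicksight:UpdateDashboardPermissions",
                "quicksight:DescribeDashboardPermissions",
                "quicksight:UpdateDashboardPublishedVersion",
            ],
        },
        {
            # Everyone else: view-only access.
            "Principal": f"arn:aws:quicksight:us-east-1:{ACCOUNT_ID}:group/default/all-employees",
            "Actions": [
                "quicksight:DescribeDashboard",
                "quicksight:ListDashboardVersions",
                "quicksight:QueryDashboard",
            ],
        },
    ],
)
```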
Question #17
A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.
What should the solutions architect do to meet this requirement?
- A. Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances. (Most Voted)
- B. Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
- C. Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
- D. Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.
Correct Answer: A
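Option A works because EC2 instances consume roles through an instance profile and receive temporary credentials automatically; policies, groups, and users cannot be attached to instances, which rules out B, C, and D. A minimal boto3 sketch with hypothetical names and placeholder IDs:

```python
import json

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy that lets EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="AppS3AccessRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Inline policy granting access to the document bucket (placeholder name).
s3_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-documents",
                     "arn:aws:s3:::example-documents/*"],
    }],
}
iam.put_role_policy(RoleName="AppS3AccessRole", PolicyName="S3DocumentAccess",
                    PolicyDocument=json.dumps(s3_policy))

# EC2 consumes roles via an instance profile.
iam.create_instance_profile(InstanceProfileName="AppS3AccessProfile")
iam.add_role_to_instance_profile(InstanceProfileName="AppS3AccessProfile",
                                 RoleName="AppS3AccessRole")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "AppS3AccessProfile"},
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```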
Question #18
An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket.
A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.
Which combination of actions will meet these requirements? (Choose two.)
- A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket. (Most Voted)
- B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue. (Most Voted)
- C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.
- D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.
- E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.
Correct Answer: A, B
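A sketch of the Lambda half of this design follows. It assumes the SQS queue is configured as the function's event source mapping and that the Pillow imaging library is packaged with the function; the destination bucket name is a placeholder. With an SQS event source, returning normally lets Lambda delete the processed messages, which is the delete-on-success behavior option B describes.

```python
import io
import json
from urllib.parse import unquote_plus

import boto3
from PIL import Image  # assumes Pillow is packaged with the function

s3 = boto3.client("s3")
COMPRESSED_BUCKET = "example-compressed-images"  # placeholder

def handler(event, context):
    """Compress images announced by S3 notifications delivered via SQS."""
    for record in event["Records"]:
        # Each SQS message body carries an S3 event notification document.
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = unquote_plus(s3_record["s3"]["object"]["key"])

            original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            image = Image.open(io.BytesIO(original)).convert("RGB")

            buffer = io.BytesIO()
            image.save(buffer, format="JPEG", quality=70, optimize=True)
            buffer.seek(0)

            s3.put_object(
                Bucket=COMPRESSED_BUCKET,
                Key=key,
                Body=buffer,
                ContentType="image/jpeg",
            )
    # No explicit delete call needed: a clean return tells the SQS event
    # source mapping to remove the batch; an exception makes the messages
    # reappear for retry, which keeps the pipeline durable and stateless.
```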
Question #19
A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.
Which solution will meet these requirements with the LEAST operational overhead?
- A. Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
- B. Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
- C. Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
- D. Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance. (Most Voted)
Correct Answer: D
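The voted answer reflects the purpose-built pattern: a Gateway Load Balancer (GWLB) fronts the appliance in the inspection VPC, and a GWLB endpoint in the application VPC transparently forwards packets to it at layer 3, with far less routing and management work than a transit gateway or a conventional load balancer. A sketch with placeholder ARNs and IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Expose the GWLB (already fronting the appliance) as an endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/gwy/inspection-gwlb/0123456789abcdef"  # placeholder
    ],
    AcceptanceRequired=False,
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Create the GWLB endpoint in the application VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0123456789abcdef0",          # application VPC (placeholder)
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
)
# Route tables then direct ingress traffic to the endpoint (for example,
# an edge route of 10.0.1.0/24 -> vpce-...), so packets pass through the
# appliance before they reach the web tier.
```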
Question #20
A company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS) volumes. Modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test environment.
Which solution will meet these requirements?
- A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store volumes in the test environment.
- B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
- C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production EBS snapshots.
- D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2 instances in the test environment. (Most Voted)
Correct Answer: D
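Fast snapshot restore matters here because volumes restored from ordinary snapshots are lazily loaded from Amazon S3 and pay a first-read latency penalty, which would break the consistently-high-I/O requirement. A boto3 sketch of option D with placeholder volume, snapshot, and Availability Zone values (note that fast snapshot restore itself takes some time to reach the enabled state after it is requested):

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot the production volume; test copies never touch production data.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder production volume
    Description="Production clone for test environment",
)
snapshot_id = snapshot["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# Enable fast snapshot restore in the test environment's AZ so restored
# volumes deliver full provisioned performance immediately.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],       # placeholder AZ
    SourceSnapshotIds=[snapshot_id],
)

# Create the test volume from the snapshot and attach it to a test instance
# once it becomes available.
ec2.create_volume(
    SnapshotId=snapshot_id,
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
)
```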