Amazon AWS Certified Solutions Architect - Associate SAA-C03 Exam Practice Questions (P. 1)
Question #1
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.
Which solution meets these requirements?
- A. Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket. (Most Voted)
- B. Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
- C. Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
- D. Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
Correct Answer:
A

S3 Transfer Acceleration routes uploads through the nearest CloudFront edge location and onto Amazon's backbone network, which significantly speeds up transfers to an S3 bucket over long distances; this matters when aggregating 500 GB per site per day from locations on multiple continents. Multipart uploads complement this by splitting large files into parts that upload in parallel and can be retried individually. Because the feature is a bucket setting plus a client-side endpoint change, it adds almost no operational complexity, unlike the replication chains, Snowball logistics, or EBS snapshot shuffling in the other options.
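As a rough sketch of what each site's upload could look like with boto3 (the bucket name, object keys, and part sizes below are illustrative assumptions, not values from the question):

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

BUCKET = "global-telemetry-aggregate"  # hypothetical destination bucket

# One-time bucket setting: enable Transfer Acceleration on the destination.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET, AccelerateConfiguration={"Status": "Enabled"})

# Uploads from each site then target the accelerate endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Multipart settings: files over 64 MB are split into 16 MB parts and
# uploaded by up to 10 concurrent threads.
transfer_config = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                                 multipart_chunksize=16 * 1024 * 1024,
                                 max_concurrency=10)

s3.upload_file("site_data_2024-06-01.csv", BUCKET,
               "sites/eu-west/2024-06-01.csv", Config=transfer_config)
```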
Question #2
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
- A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
- B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
- C. Use Amazon Athena directly with Amazon S3 to run the queries as needed. (Most Voted)
- D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.
Correct Answer:
C

Amazon Athena aligns directly with the requirement: it queries data in place on S3 using standard SQL, so the JSON logs need no loading, transformation, or movement. As a serverless, pay-per-query service it requires no cluster provisioning or maintenance, keeping operational overhead minimal for simple on-demand queries. Redshift (A) and EMR (D) both introduce clusters to manage, and CloudWatch Logs (B) would require re-ingesting logs that already live in S3.
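A minimal boto3 sketch of such an on-demand query, assuming a hypothetical logs_db database whose app_logs table has already been defined over the S3 JSON data (for example with a Glue crawler or a CREATE EXTERNAL TABLE statement), plus an invented results bucket:

```python
import time
import boto3

athena = boto3.client("athena")

# Hypothetical database, table, and results location.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits "
                "FROM app_logs WHERE status >= 500 GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll until the query finishes, then fetch the result rows.
qid = query["QueryExecutionId"]
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```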
Question #3
A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?
- A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy. (Most Voted)
- B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
- C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
- D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.
Correct Answer:
A

The aws:PrincipalOrgID global condition key in an S3 bucket policy is the most direct way to limit access to principals from accounts within a specific AWS organization. It checks whether the principal making the request belongs to any account in the organization, so the policy never needs updating as accounts join or leave. This avoids maintaining per-department OUs, event-driven policy rewrites, or individual user tags, making it the lowest-overhead option.
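A sketch of what that bucket policy might look like when applied with boto3; the bucket name and organization ID are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

# Bucket name and organization ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrganizationMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::project-reports/*",
        # Grants access only when the caller belongs to an account
        # inside this AWS organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}},
    }],
}

s3.put_bucket_policy(Bucket="project-reports", Policy=json.dumps(policy))
```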
Question #4
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
- A. Create a gateway VPC endpoint to the S3 bucket. (Most Voted)
- B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
- C. Create an instance profile on Amazon EC2 to allow S3 access.
- D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.
Correct Answer:
A

A gateway VPC endpoint for S3 provides private connectivity from the EC2 instance to the bucket without traversing the public internet: traffic stays on the AWS network, routed via an entry that the endpoint adds to the VPC route table. Gateway endpoints for S3 incur no additional charge. Note that an instance profile (C) grants permissions but not network connectivity, and the endpoint serves only resources inside the VPC, not on-premises environments or other Regions.
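A minimal sketch of creating such an endpoint with boto3, with placeholder VPC and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC and route table IDs are placeholders. The endpoint adds a route
# for the S3 prefix list to the given route tables, so instances reach
# S3 over the AWS network instead of the internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0abc1234def567890"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```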
Question #5
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
- A. Copy the data so both EBS volumes contain all the documents
- B. Configure the Application Load Balancer to direct a user to the server with the documents
- C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS (Most Voted)
- D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server
Correct Answer:
C

Amazon EFS resolves the split-document problem by giving both EC2 instances a single shared file system that is accessible across Availability Zones. After copying the existing documents from both EBS volumes into EFS and pointing the application at the EFS mount, every instance behind the Load Balancer reads and writes the same set of files, so users see all of their documents regardless of which instance serves the request.
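A rough boto3 sketch of the proposed setup; subnet, security group, and token values are placeholders, and a production script would wait for the file system to become available before adding mount targets:

```python
import boto3

efs = boto3.client("efs")

# Create one shared file system; names and IDs below are placeholders.
fs = efs.create_file_system(CreationToken="shared-user-docs",
                            PerformanceMode="generalPurpose",
                            Encrypted=True)

# One mount target per Availability Zone lets the instance in each AZ
# mount the same file system over NFS.
for subnet_id in ("subnet-aaaa1111", "subnet-bbbb2222"):
    efs.create_mount_target(FileSystemId=fs["FileSystemId"],
                            SubnetId=subnet_id,
                            SecurityGroups=["sg-cccc3333"])

# On each instance the share is then mounted, e.g.:
#   sudo mount -t efs <FileSystemId>:/ /mnt/documents
```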
Question #6
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
- A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
- B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3. (Most Voted)
- C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
- D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Correct Answer:
B

AWS Snowball Edge is the right fit: the 70 TB dataset is copied locally onto a physical device and shipped to AWS for import into S3, so the migration consumes essentially no network bandwidth, which is the question's hard requirement. An S3 File Gateway (C) or Direct Connect with a File Gateway (D) would still push all 70 TB over the network, and Direct Connect adds weeks of provisioning lead time; copying with the AWS CLI (A) has the same bandwidth problem and is the slowest path for files up to 500 GB.
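For illustration, a Snowball Edge import job can be created programmatically; everything below (bucket, address, role, shipping option) is a placeholder, and in practice the job is often created from the console:

```python
import boto3

snowball = boto3.client("snowball")

# Address, role, and bucket identifiers are placeholders that must be
# created beforehand (e.g., via create_address and IAM).
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_S",  # Snowball Edge Storage Optimized
    Resources={"S3Resources": [
        {"BucketArn": "arn:aws:s3:::video-archive"},
    ]},
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::123456789012:role/snowball-import",
    ShippingOption="NEXT_DAY",
    Description="70 TB on-premises video migration",
)
print(job["JobId"])
```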
Question #7
A company has an application that ingests incoming messages. Dozens of other applications and microservices then quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?
- A. Persist the messages to Amazon Kinesis Data Analytics. Configure the consumer applications to read and process the messages.
- B. Deploy the ingestion application on Amazon EC2 instances in an Auto Scaling group to scale the number of EC2 instances based on CPU metrics.
- C. Write the messages to Amazon Kinesis Data Streams with a single shard. Use an AWS Lambda function to preprocess messages and store them in Amazon DynamoDB. Configure the consumer applications to read from DynamoDB to process the messages.
- D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with multiple Amazon Simple Queue Service (Amazon SQS) subscriptions. Configure the consumer applications to process the messages from the queues. (Most Voted)
Correct Answer:
D

Publishing each message to an Amazon SNS topic with multiple SQS queue subscriptions (the classic fanout pattern) decouples the ingestion application from its dozens of consumers: every application drains its own queue at its own pace, and both SNS and SQS scale automatically to absorb sudden spikes such as 100,000 messages per second. Option A fails because Kinesis Data Analytics is a stream-processing service, not a message store that consumers can read from; option C's single shard caps ingestion at roughly 1,000 records per second; and option B scales the producer without decoupling anything.
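A condensed boto3 sketch of the fanout wiring, with made-up topic and queue names:

```python
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="incoming-orders")["TopicArn"]

# One queue per consumer; every queue receives its own copy of each message.
for consumer in ("billing", "inventory", "analytics"):
    queue_url = sqs.create_queue(QueueName=f"{consumer}-orders")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

    # Allow the topic to deliver messages into this queue.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
        }],
    }
    sqs.set_queue_attributes(QueueUrl=queue_url,
                             Attributes={"Policy": json.dumps(policy)})

    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
```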
Question #8
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
- A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
- B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue. (Most Voted)
- C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
- D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Correct Answer:
B

Routing jobs through an SQS queue decouples the scheduler from the workers and removes the legacy primary server as a single point of failure. Scaling the Auto Scaling group on queue depth matches compute capacity to the actual backlog, which is exactly what a variable workload requires; scheduled scaling (A) cannot react to unpredictable demand. Options C and D keep the primary server pattern, and CloudTrail in particular is an audit-logging service, not a job destination.
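A sketch of such a policy with boto3; the group and queue names are invented, and note that AWS's own guidance often recommends tracking a computed backlog-per-instance metric rather than the raw queue depth this simplified version uses:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Group and queue names are placeholders. The policy adds or removes
# instances to keep the average number of visible messages near 100.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="job-workers",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,
    },
)
```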
Question #9
A company is running an SMB file server in its data center. The file server stores large files that are accessed frequently for the first few days after the files are created. After 7 days the files are rarely accessed.
The total data size is increasing and is close to the company's total storage capacity. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files. The solutions architect must also provide file lifecycle management to avoid future storage issues.
Which solution will meet these requirements?
- A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
- B. Create an Amazon S3 File Gateway to extend the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days. (Most Voted)
- C. Create an Amazon FSx for Windows File Server file system to extend the company's storage space.
- D. Install a utility on each user's computer to access Amazon S3. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Correct Answer:
B

An Amazon S3 File Gateway supports SMB, so it extends the existing file server workflow into S3 with no client-side changes, while its on-premises cache preserves low-latency access to the most recently used files. Pairing it with an S3 Lifecycle rule that transitions objects to S3 Glacier Deep Archive after 7 days matches the stated access pattern and keeps storage costs down as data grows. DataSync (A) copies data but provides no ongoing lifecycle management, FSx for Windows File Server (C) extends capacity without tiering, and a per-user S3 utility (D) abandons the SMB workflow entirely.
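A sketch of the lifecycle rule applied with boto3, using a placeholder bucket name for the bucket behind the File Gateway share:

```python
import boto3

s3 = boto3.client("s3")

# Bucket name is a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="smb-share-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-7-days",
            "Status": "Enabled",
            "Filter": {},  # empty filter applies the rule to every object
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```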
Question #10
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
- A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
- B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing. (Most Voted)
- C. Use an API Gateway authorizer to block any requests while the application processes an order.
- D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Correct Answer:
B

An SQS FIFO (First-In-First-Out) queue is designed for exactly this requirement: messages that share a MessageGroupId are delivered exactly once and in the precise order they were sent, so orders are processed as received. A standard SNS topic (A) and a standard SQS queue (D) offer only best-effort ordering with at-least-once delivery, and an API Gateway authorizer (C) handles authorization, not sequencing.
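A short boto3 sketch of the FIFO setup, with an invented queue name and payload:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication
# derives the deduplication ID from a hash of the message body.
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Messages sharing a MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageGroupId="orders",
    MessageBody='{"orderId": "1234", "total": 59.99}',
)
```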