Amazon AWS Certified Security - Specialty SCS-C02 Exam Practice Questions (P. 1)
Question #1
A company has an AWS Lambda function that creates image thumbnails from larger images. The Lambda function needs read and write access to an Amazon S3 bucket in the same AWS account.
Which solutions will provide the Lambda function this access? (Choose two.)
- A. Create an IAM user that has only programmatic access. Create a new access key pair. Add environmental variables to the Lambda function with the access key ID and secret access key. Modify the Lambda function to use the environmental variables at run time during communication with Amazon S3.
- B. Generate an Amazon EC2 key pair. Store the private key in AWS Secrets Manager. Modify the Lambda function to retrieve the private key from Secrets Manager and to use the private key during communication with Amazon S3.
- C. Create an IAM role for the Lambda function. Attach an IAM policy that allows access to the S3 bucket. (Most Voted)
- D. Create an IAM role for the Lambda function. Attach a bucket policy to the S3 bucket to allow access. Specify the function's IAM role as the principal. (Most Voted)
- E. Create a security group. Attach the security group to the Lambda function. Attach a bucket policy that allows access to the S3 bucket through the security group ID.
Correct Answer:
CD

Creating an IAM role for the Lambda function is the most secure way to grant it access to AWS resources. Options C and D are correct: both create an IAM role and assign permissions, either through an attached IAM policy that allows access to the S3 bucket directly or by referencing the function's role as the principal in a bucket policy. This approach avoids exposing long-lived credentials and keeps permissions centrally managed through IAM, adhering to the principle of least privilege. Embedding access keys in environment variables (A), EC2 key pairs (B), and security groups (E) are not valid or recommended mechanisms for granting S3 access.
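To make the least-privilege idea in options C and D concrete, here is a minimal sketch of the identity policy you would attach to the Lambda execution role. The bucket name and the restriction to `GetObject`/`PutObject` are illustrative assumptions, not details from the question.

```python
import json

# Hypothetical bucket name for illustration only.
BUCKET = "thumbnail-images-bucket"

def lambda_s3_policy(bucket: str) -> dict:
    """Build a least-privilege identity policy for the Lambda execution role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

print(json.dumps(lambda_s3_policy(BUCKET), indent=2))
```

Scoping `Resource` to the single bucket's objects, rather than `*`, is what keeps the role aligned with least privilege.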
Question #2
A security engineer is configuring a new website that is named example.com. The security engineer wants to secure communications with the website by requiring users to connect to example.com through HTTPS.
Which of the following is a valid option for storing SSL/TLS certificates?
- A. Custom SSL certificate that is stored in AWS Key Management Service (AWS KMS)
- B. Default SSL certificate that is stored in Amazon CloudFront
- C. Custom SSL certificate that is stored in AWS Certificate Manager (ACM) (Most Voted)
- D. Default SSL certificate that is stored in Amazon S3
Correct Answer:
C

AWS Certificate Manager (ACM) is the service designed to provision, store, and manage SSL/TLS certificates. It integrates tightly with services such as Elastic Load Balancing and Amazon CloudFront to automate deployment, renewal, and secure maintenance, which makes it the straightforward option for serving example.com over HTTPS. AWS KMS stores encryption keys, not certificates, and neither Amazon CloudFront nor Amazon S3 acts as a certificate store.
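As a sketch of how a certificate request to ACM looks with boto3, the snippet below builds the request parameters; the actual `request_certificate` call is commented out because it needs AWS credentials, and the `www.` subject alternative name is an assumption for illustration.

```python
# Minimal sketch, assuming DNS validation so ACM can renew automatically.
def acm_request_params(domain: str) -> dict:
    """Parameters for an ACM public certificate request."""
    return {
        "DomainName": domain,
        "SubjectAlternativeNames": [f"www.{domain}"],  # assumed extra name
        "ValidationMethod": "DNS",
    }

params = acm_request_params("example.com")
# import boto3
# acm = boto3.client("acm", region_name="us-east-1")  # us-east-1 for CloudFront
# response = acm.request_certificate(**params)
print(params["DomainName"])
```

DNS validation is generally preferred over email validation because ACM can then renew the certificate without manual action.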
Question #3
A security engineer needs to develop a process to investigate and respond to potential security events on a company's Amazon EC2 instances. All the EC2 instances are backed by Amazon Elastic Block Store (Amazon EBS). The company uses AWS Systems Manager to manage all the EC2 instances and has installed Systems Manager Agent (SSM Agent) on all the EC2 instances.
The process that the security engineer is developing must comply with AWS security best practices and must meet the following requirements:
A compromised EC2 instance's volatile memory and non-volatile memory must be preserved for forensic purposes.
A compromised EC2 instance's metadata must be updated with corresponding incident ticket information.
A compromised EC2 instance must remain online during the investigation but must be isolated to prevent the spread of malware.
Any investigative activity during the collection of volatile data must be captured as part of the process.
Which combination of steps should the security engineer take to meet these requirements with the LEAST operational overhead? (Choose three.)
- A. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Isolate the instance by updating the instance's security groups to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources. (Most Voted)
- B. Gather any relevant metadata for the compromised EC2 instance. Enable termination protection. Move the instance to an isolation subnet that denies all source and destination traffic. Associate the instance with the subnet to restrict access. Detach the instance from any Auto Scaling groups that the instance is a member of. Deregister the instance from any Elastic Load Balancing (ELB) resources.
- C. Use Systems Manager Run Command to invoke scripts that collect volatile data. (Most Voted)
- D. Establish a Linux SSH or Windows Remote Desktop Protocol (RDP) session to the compromised EC2 instance to invoke scripts that collect volatile data.
- E. Create a snapshot of the compromised EC2 instance's EBS volume for follow-up investigations. Tag the instance with any relevant metadata and incident ticket information. (Most Voted)
- F. Create a Systems Manager State Manager association to generate an EBS volume snapshot of the compromised EC2 instance. Tag the instance with any relevant metadata and incident ticket information.
Correct Answer:
ACE

The correct steps are A, C, and E. Option A isolates the instance with the least operational overhead: an EC2 instance's subnet association cannot be changed after launch, so option B is not feasible; instead, restricting the instance's security groups (and detaching it from Auto Scaling groups and ELB resources) isolates it while keeping it online. Option C uses Systems Manager Run Command to collect volatile data, which avoids interactive SSH/RDP sessions and logs the commands that were run, satisfying the requirement to capture all investigative activity. Option E preserves non-volatile data with an EBS snapshot and records the incident ticket information as tags on the instance. Option F adds unnecessary overhead, since State Manager is designed to maintain ongoing configuration state, not to take a one-time forensic snapshot.
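The response steps above can be sketched with boto3 as below. The instance ID, isolation security group, ticket ID, and analyst name are all hypothetical placeholders, and the AWS API calls are commented out because they require credentials; only the tag-building helper runs.

```python
# Hedged sketch of the A + C + E incident-response steps.
def incident_tags(ticket_id: str, analyst: str) -> list:
    """Tags recording incident metadata on the compromised instance."""
    return [
        {"Key": "IncidentTicket", "Value": ticket_id},
        {"Key": "Status", "Value": "under-investigation"},
        {"Key": "Analyst", "Value": analyst},
    ]

tags = incident_tags("SEC-1042", "jdoe")  # hypothetical ticket and analyst
# import boto3
# ec2 = boto3.client("ec2")
# ssm = boto3.client("ssm")
# # A: isolate by swapping to a deny-all security group (placeholder IDs)
# ec2.modify_instance_attribute(InstanceId="i-0abc...", Groups=["sg-isolated"])
# # E: preserve non-volatile data and record the ticket as tags
# ec2.create_snapshot(VolumeId="vol-0abc...", Description="SEC-1042 forensic")
# ec2.create_tags(Resources=["i-0abc..."], Tags=tags)
# # C: collect volatile data through Run Command (commands are a placeholder)
# ssm.send_command(InstanceIds=["i-0abc..."],
#                  DocumentName="AWS-RunShellScript",
#                  Parameters={"commands": ["..."]})
print(tags[0])
```

Keeping the tag payload in a helper makes it easy to assert the ticket reference is always attached before any other remediation proceeds.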
Question #4
A company has an organization in AWS Organizations. The company wants to use AWS CloudFormation StackSets in the organization to deploy various AWS design patterns into environments. These patterns consist of Amazon EC2 instances, Elastic Load Balancing (ELB) load balancers, Amazon RDS databases, and Amazon Elastic Kubernetes Service (Amazon EKS) clusters or Amazon Elastic Container Service (Amazon ECS) clusters.
Currently, the company’s developers can create their own CloudFormation stacks to increase the overall speed of delivery. A centralized CI/CD pipeline in a shared services AWS account deploys each CloudFormation stack.
The company's security team has already provided requirements for each service in accordance with internal standards. If there are any resources that do not comply with the internal standards, the security team must receive notification to take appropriate action. The security team must implement a notification solution that gives developers the ability to maintain the same overall delivery speed that they currently have.
Which solution will meet these requirements in the MOST operationally efficient way?
- A. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create a custom AWS Lambda function that will run the aws cloudformation validate-template AWS CLI command on all CloudFormation templates before the build stage in the CI/CD pipeline. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
- B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the security team's email addresses to the SNS topic. Create custom rules in CloudFormation Guard for each resource configuration. In the CI/CD pipeline, before the build stage, configure a Docker image to run the cfn-guard command on the CloudFormation template. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found. (Most Voted)
- C. Create an Amazon Simple Notification Service (Amazon SNS) topic and an Amazon Simple Queue Service (Amazon SQS) queue. Subscribe the security team's email addresses to the SNS topic. Create an Amazon S3 bucket in the shared services AWS account. Include an event notification to publish to the SQS queue when new objects are added to the S3 bucket. Require the developers to put their CloudFormation templates in the S3 bucket. Launch EC2 instances that automatically scale based on the SQS queue depth. Configure the EC2 instances to use CloudFormation Guard to scan the templates and deploy the templates if there are no issues. Configure the CI/CD pipeline to publish a notification to the SNS topic if any issues are found.
- D. Create a centralized CloudFormation stack set that includes a standard set of resources that the developers can deploy in each AWS account. Configure each CloudFormation template to meet the security requirements. For any new resources or configurations, update the CloudFormation template and send the template to the security team for review. When the review is completed, add the new CloudFormation stack to the repository for the developers to use.
Correct Answer:
B

Option B uses CloudFormation Guard (cfn-guard), a policy-as-code tool, to encode the security team's requirements as rules and evaluate every template in the CI/CD pipeline before the build stage. Noncompliant templates trigger an Amazon SNS notification to the security team, while compliant templates flow through the pipeline unchanged, so developers keep their current delivery speed. Option A's validate-template command checks only template syntax, not compliance with internal standards; option C introduces a fleet of EC2 instances that must be managed; and option D replaces automation with a manual review step that slows delivery.
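The pipeline gating step can be sketched as below, assuming cfn-guard is installed on the build image and exits nonzero when a template violates a rule. The rules file, template name, and SNS topic ARN are hypothetical; the AWS and subprocess calls are commented out since they need a real build environment.

```python
# Minimal sketch of the cfn-guard gate in the CI/CD pipeline.
def should_notify(returncode: int) -> bool:
    """Treat any nonzero cfn-guard exit code as a compliance failure."""
    return returncode != 0

# import subprocess, boto3
# result = subprocess.run(
#     ["cfn-guard", "validate",
#      "--data", "template.yaml", "--rules", "security.guard"],
#     capture_output=True, text=True)
# if should_notify(result.returncode):
#     boto3.client("sns").publish(
#         TopicArn="arn:aws:sns:us-east-1:111122223333:security-alerts",
#         Message=result.stdout)  # notify the security team, fail the stage
print(should_notify(2))  # True: a violating template triggers a notification
```

Running the check before the build stage means a failing template never reaches deployment, which is what keeps the notification purely informational for the security team rather than a bottleneck for developers.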
Question #5
A company is migrating one of its legacy systems from an on-premises data center to AWS. The application server will run on AWS, but the database must remain in the on-premises data center for compliance reasons. The database is sensitive to network latency. Additionally, the data that travels between the on-premises data center and AWS must have IPsec encryption.
Which combination of AWS solutions will meet these requirements? (Choose two.)
- A. AWS Site-to-Site VPN (Most Voted)
- B. AWS Direct Connect (Most Voted)
- C. AWS VPN CloudHub
- D. VPC peering
- E. NAT gateway
Correct Answer:
AB

Opting for AWS Site-to-Site VPN combined with Direct Connect offers a robust solution. Site-to-Site VPN ensures the data between your on-premises database and AWS environment is encrypted using IPsec, crucial for maintaining security over internet-based connections. Meanwhile, AWS Direct Connect provides a dedicated network link, significantly reducing latency compared to typical internet connections. This setup not only meets compliance demands by securing data in transit but also addresses the application's sensitivity to network delays, ensuring efficient and secure communication between your AWS infrastructure and the on-premises database.
Question #6
A company has an application that uses dozens of Amazon DynamoDB tables to store data. Auditors find that the tables do not comply with the company's data protection policy.
The company's retention policy states that all data must be backed up twice each month: once at midnight on the 15th day of the month and again at midnight on the 25th day of the month. The company must retain the backups for 3 months.
Which combination of steps should a security engineer take to meet these requirements? (Choose two.)
- A. Use the DynamoDB on-demand backup capability to create a backup plan. Configure a lifecycle policy to expire backups after 3 months.
- B. Use AWS DataSync to create a backup plan. Add a backup rule that includes a retention period of 3 months.
- C. Use AWS Backup to create a backup plan. Add a backup rule that includes a retention period of 3 months. (Most Voted)
- D. Set the backup frequency by using a cron schedule expression. Assign each DynamoDB table to the backup plan. (Most Voted)
- E. Set the backup frequency by using a rate schedule expression. Assign each DynamoDB table to the backup plan.
Correct Answer:
CD

AWS Backup supports centralized backup plans with scheduled rules, fitting the company's need to back up on the 15th and 25th days of each month at midnight and to retain those backups for three months through a lifecycle setting in the backup rule. A cron schedule expression can target specific days of the month, which a rate expression cannot, so C and D together meet the requirements. DynamoDB on-demand backups have no built-in scheduling or lifecycle expiry, and AWS DataSync is a data transfer service, not a backup service.
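AWS Backup schedule expressions use a six-field cron form, cron(minute hour day-of-month month day-of-week year). The helper below builds the expression for midnight on the 15th and 25th; the rule and vault names in the surrounding dict are hypothetical, and the shape of the rule is a sketch rather than the exact AWS Backup API payload.

```python
# Build the AWS Backup cron expression for fixed days of the month.
def backup_cron(days: list) -> str:
    """Midnight UTC on each listed day of the month."""
    return f"cron(0 0 {','.join(str(d) for d in days)} * ? *)"

rule = {
    "RuleName": "bimonthly-dynamodb",          # hypothetical name
    "TargetBackupVaultName": "Default",
    "ScheduleExpression": backup_cron([15, 25]),
    "Lifecycle": {"DeleteAfterDays": 90},      # ~3-month retention
}
print(rule["ScheduleExpression"])  # cron(0 0 15,25 * ? *)
```

A rate expression such as rate(15 days) drifts relative to calendar dates, which is why the cron form is the right fit for "the 15th and the 25th".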
Question #7
A company needs a security engineer to implement a scalable solution for multi-account authentication and authorization. The solution should not introduce additional user-managed architectural components. Native AWS features should be used as much as possible. The security engineer has set up AWS Organizations with all features activated and AWS IAM Identity Center (AWS Single Sign-On) enabled.
Which additional steps should the security engineer take to complete the task?
- A. Use AD Connector to create users and groups for all employees that require access to AWS accounts. Assign AD Connector groups to AWS accounts and link to the IAM roles in accordance with the employees’ job functions and access requirements. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
- B. Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Assign groups to AWS accounts and link to permission sets in accordance with the employees’ job functions and access requirements. Instruct employees to access AWS accounts by using the IAM Identity Center user portal. (Most Voted)
- C. Use an IAM Identity Center default directory to create users and groups for all employees that require access to AWS accounts. Link IAM Identity Center groups to the IAM users present in all accounts to inherit existing permissions. Instruct employees to access AWS accounts by using the IAM Identity Center user portal.
- D. Use AWS Directory Service for Microsoft Active Directory to create users and groups for all employees that require access to AWS accounts. Enable AWS Management Console access in the created directory and specify IAM Identity Center as a source of information for integrated accounts and permission sets. Instruct employees to access AWS accounts by using the AWS Directory Service user portal.
Correct Answer:
B

For multi-account authentication and authorization without additional user-managed components, AWS IAM Identity Center (formerly AWS Single Sign-On) is the most efficient choice. Its default directory handles user and group creation natively, groups are assigned to AWS accounts through permission sets that match employees' job functions, and employees sign in through the IAM Identity Center user portal. This eliminates duplicated IAM user configurations across accounts, relies entirely on native AWS features, and introduces no on-premises dependencies, so option B satisfies the requirements.
Question #8
A company has deployed Amazon GuardDuty and now wants to implement automation for potential threats. The company has decided to start with RDP brute force attacks that come from Amazon EC2 instances in the company's AWS environment. A security engineer needs to implement a solution that blocks the detected communication from a suspicious instance until investigation and potential remediation can occur.
Which solution will meet these requirements?
- A. Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event with an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.
- B. Configure GuardDuty to send the event to Amazon EventBridge. Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS) and adds a web ACL rule to block traffic to and from the suspicious instance.
- C. Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge. Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance. (Most Voted)
- D. Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
Correct Answer:
C

The selected answer, C, ingests GuardDuty findings through AWS Security Hub and routes them to Amazon EventBridge, which invokes an AWS Lambda function that adds a rule to an AWS Network Firewall policy blocking traffic to and from the suspicious instance. This isolates the detected communication automatically until investigation and remediation can occur. AWS WAF (option B) protects web applications behind a web ACL and cannot block RDP traffic between EC2 instances, and options A and D introduce Kinesis streaming components that add complexity without improving this event-driven response to RDP brute force findings.
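The EventBridge rule at the heart of this automation matches on the GuardDuty finding type. The sketch below builds such an event pattern; UnauthorizedAccess:EC2/RDPBruteForce is GuardDuty's documented finding type for this attack, but treat the exact pattern shape as an assumption to adapt.

```python
import json

# Minimal sketch of an EventBridge event pattern for GuardDuty findings.
def guardduty_pattern(finding_type: str) -> dict:
    """Match GuardDuty findings of one specific type."""
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"type": [finding_type]},
    }

pattern = guardduty_pattern("UnauthorizedAccess:EC2/RDPBruteForce")
print(json.dumps(pattern))
```

The Lambda target of the rule would read the instance details from the finding's detail field and update the Network Firewall policy accordingly.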
Question #9
A company has an AWS account that hosts a production application. The company receives an email notification that Amazon GuardDuty has detected an Impact:IAMUser/AnomalousBehavior finding in the account. A security engineer needs to run the investigation playbook for this security incident and must collect and analyze the information without affecting the application.
Which solution will meet these requirements MOST quickly?
- A. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
- B. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use Amazon Detective to review the API calls in context. (Most Voted)
- C. Log in to the AWS account by using administrator credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
- D. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use AWS CloudTrail Insights and AWS CloudTrail Lake to review the API calls in context.
Correct Answer:
B

Option B is the most efficient method in this scenario for investigating the security incident reported by GuardDuty. Utilizing read-only credentials ensures that the security checkup does not interfere with application function or data integrity. Amazon Detective's integration with GuardDuty facilitates rapid analysis by providing contextual insight into the suspicious API activities, enhancing the speed and accuracy of the investigation process. This approach is not only swift but also maintains a high level of security compliance by avoiding any modifications to existing permissions or settings.
Question #10
Company A has an AWS account that is named Account A. Company A recently acquired Company B, which has an AWS account that is named Account B. Company B stores its files in an Amazon S3 bucket. The administrators need to give a user from Account A full access to the S3 bucket in Account B.
After the administrators adjust the IAM permissions for the user in Account A to access the S3 bucket in Account B, the user still cannot access any files in the S3 bucket.
Which solution will resolve this issue?
- A. In Account B, create a bucket ACL to allow the user from Account A to access the S3 bucket in Account B.
- B. In Account B, create an object ACL to allow the user from Account A to access all the objects in the S3 bucket in Account B.
- C. In Account B, create a bucket policy to allow the user from Account A to access the S3 bucket in Account B. (Most Voted)
- D. In Account B, create a user policy to allow the user from Account A to access the S3 bucket in Account B.
Correct Answer:
C

The correct approach, C, sets the access permissions at the S3 bucket policy level. For cross-account access, the bucket policy in Account B must name the user (or account) from Account A as the principal and allow the required S3 actions on the bucket and its objects. Because the grant lives on the bucket itself, there is no need to modify individual object ACLs or maintain complex per-user policies, making this the standard, manageable way to share resources across AWS accounts. Note that cross-account access requires both sides: the IAM permissions already configured in Account A and the bucket policy in Account B.
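The bucket policy for Account B can be sketched as below. The account ID, user name, bucket name, and the choice of actions are placeholders for illustration, not values from the question.

```python
import json

# Sketch of the cross-account bucket policy applied in Account B.
def cross_account_policy(account_id: str, user: str, bucket: str) -> dict:
    """Allow a user from another account to list and read the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:user/{user}"},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # ListBucket targets the bucket
                    f"arn:aws:s3:::{bucket}/*",     # GetObject targets objects
                ],
            }
        ],
    }

policy = cross_account_policy("111122223333", "analyst", "company-b-files")
print(json.dumps(policy, indent=2))
```

Note the two Resource entries: bucket-level actions such as `s3:ListBucket` apply to the bucket ARN, while object-level actions such as `s3:GetObject` apply to the `/*` object ARN.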