Amazon AWS-SysOps Exam Practice Questions (P. 5)
Question #41
What are characteristics of Amazon S3? (Choose two.)
- A. Objects are directly accessible via a URL
- B. S3 should be used to host a relational database
- C. S3 allows you to store objects of virtually unlimited size
- D. S3 allows you to store virtually unlimited amounts of data
- E. S3 offers Provisioned IOPS
Correct Answer:
AD
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
Reference:
https://aws.amazon.com/s3/faqs/
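The size limits quoted in the explanation can be sketched as a small helper. This is illustrative only: the function name, return format, and default part size are not an AWS API, but the limits (5 TB maximum object, 5 GB maximum single PUT, 100 MB multipart threshold) come from the S3 FAQ cited above.

```python
# Sketch of the S3 object-size rules from the explanation above.
import math

MAX_OBJECT_BYTES = 5 * 1024**4       # 5 TB: largest S3 object
MAX_SINGLE_PUT_BYTES = 5 * 1024**3   # 5 GB: largest single PUT
MULTIPART_THRESHOLD = 100 * 1024**2  # 100 MB: recommended multipart cutoff

def plan_upload(size_bytes: int, part_size: int = 100 * 1024**2) -> dict:
    """Return a rough upload plan for an object of the given size."""
    if size_bytes > MAX_OBJECT_BYTES:
        raise ValueError("object exceeds the 5 TB S3 limit")
    if size_bytes <= MULTIPART_THRESHOLD:
        return {"strategy": "single_put", "parts": 1}
    # Multipart upload: fixed-size parts, last part may be smaller.
    return {"strategy": "multipart", "parts": math.ceil(size_bytes / part_size)}
```

For example, a 1 GB object with 100 MB parts yields an 11-part multipart upload, while a 50 MB object fits in a single PUT.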
Question #42
You receive a frantic call from a new DBA who accidentally dropped a table containing all your customers.
Which Amazon RDS feature will allow you to reliably restore your database to within 5 minutes of when the mistake was made?
- A. Multi-AZ RDS
- B. RDS snapshots
- C. RDS read replicas
- D. RDS automated backup (Most Voted)
Correct Answer:
D
Reference:
https://aws.amazon.com/rds/details/#ha
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIT.html
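Point-in-time recovery from automated backups can be sketched as below. The helper and its parameters are illustrative; only the commented-out `restore_db_instance_to_point_in_time` call is the real boto3 API, and the instance identifiers are placeholders. RDS ships transaction logs every 5 minutes, which is why a restore can reliably land within 5 minutes of the mistake.

```python
# Sketch of picking a point-in-time-recovery target for RDS automated backups.
from datetime import datetime, timedelta, timezone

def restore_target(mistake_at: datetime, margin_minutes: int = 5) -> datetime:
    """Pick a restore time just before the mistake, within the 5-minute
    granularity of RDS transaction-log backups."""
    return mistake_at - timedelta(minutes=margin_minutes)

# Example: table dropped at 14:32 UTC -> restore to 14:27 UTC.
mistake = datetime(2024, 1, 10, 14, 32, tzinfo=timezone.utc)
target = restore_target(mistake)

# With boto3 (not executed here), the restore creates a *new* instance:
# import boto3
# boto3.client("rds").restore_db_instance_to_point_in_time(
#     SourceDBInstanceIdentifier="prod-db",
#     TargetDBInstanceIdentifier="prod-db-restored",
#     RestoreTime=target,
# )
```

Note that the restore always produces a new DB instance; snapshots (option B) cannot hit an arbitrary 5-minute-old point unless one happened to be taken then.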
Question #43
A media company produces new video files on-premises every day, with a total size of around 100 GB after compression. All files have a size of 1-2 GB and need to be uploaded to Amazon S3 every night in a fixed time window between 3am and 5am. The current upload takes almost 3 hours, although less than half of the available bandwidth is used.
What step(s) would ensure that the file uploads are able to complete in the allotted time window?
- A. Increase your network bandwidth to provide faster throughput to S3
- B. Upload the files in parallel to S3
- C. Pack all files into a single archive, upload it to S3, then extract the files in AWS
- D. Use AWS Import/Export to transfer the video files
Correct Answer:
B
https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/
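Parallel uploads can be sketched with a thread pool. The `upload_one` function here is a stub standing in for an S3 transfer; in practice each worker would call boto3's `s3.upload_file`, which itself uses multipart transfers for large files. Filenames and the worker count are illustrative.

```python
# Sketch: saturate available bandwidth by running several transfers at once.
from concurrent.futures import ThreadPoolExecutor

def upload_one(filename: str) -> str:
    # Placeholder for s3.upload_file(filename, bucket, key).
    return filename

def upload_all(filenames, workers: int = 8):
    """Upload files concurrently; per-connection throughput limits matter
    less when several transfers run in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_one, filenames))

uploaded = upload_all([f"video_{i}.mp4" for i in range(10)])
```

Since each nightly file is only 1-2 GB and under half the bandwidth is used, running transfers in parallel is what shortens the window, not more bandwidth.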
Question #44
You are running a web application on AWS consisting of the following components: an Elastic Load Balancer (ELB), an Auto Scaling group of EC2 instances running Linux/PHP/Apache, and Relational Database Service (RDS) MySQL.
Which security measures fall into AWS's responsibility?
- A. Protect the EC2 instances against unsolicited access by enforcing the principle of least-privilege access
- B. Protect against IP spoofing or packet sniffing (Most Voted)
- C. Assure all communication between EC2 instances and ELB is encrypted
- D. Install latest security patches on ELB, RDS and EC2 instances
Correct Answer:
B
https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf
Question #45
You use S3 to store critical data for your company. Several users within your group currently have full permissions to your S3 buckets. You need to come up with a solution that does not impact your users and also protects against the accidental deletion of objects.
Which two options will address this issue? (Choose two.)
- A. Enable versioning on your S3 buckets
- B. Configure your S3 buckets with MFA Delete
- C. Create a bucket policy and only allow read-only permissions to all users at the bucket level
- D. Enable object lifecycle policies and configure the data older than 3 months to be archived in Glacier
Correct Answer:
AB
Versioning allows easy recovery of previous file versions.
MFA Delete requires additional MFA authentication to delete files.
Neither option impacts the users' current access.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
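Enabling both protections together can be sketched as below. The dict shape matches the payload boto3's `put_bucket_versioning` expects; the helper itself, the bucket name, and the MFA device ARN in the commented-out call are placeholders.

```python
# Sketch of the bucket-versioning payload for versioning plus MFA Delete.
def versioning_config(enable_mfa_delete: bool = True) -> dict:
    cfg = {"Status": "Enabled"}
    if enable_mfa_delete:
        # MFA Delete can only be toggled by the bucket owner using an
        # authenticated MFA device.
        cfg["MFADelete"] = "Enabled"
    return cfg

# With boto3 (not executed here):
# boto3.client("s3").put_bucket_versioning(
#     Bucket="critical-data-bucket",
#     MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
#     VersioningConfiguration=versioning_config(),
# )
```

Because this is bucket-level configuration, existing users keep their full permissions on objects, which is why options A and B do not impact them.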
Question #46
An organization's security policy requires multiple copies of all critical data to be replicated across at least a primary and backup data center. The organization has decided to store some critical data on Amazon S3.
Which option should you implement to ensure this requirement is met?
- A. Use the S3 copy API to replicate data between two S3 buckets in different regions
- B. You do not need to implement anything, since S3 data is automatically replicated between regions
- C. Use the S3 copy API to replicate data between two S3 buckets in different facilities within an AWS Region
- D. You do not need to implement anything, since S3 data is automatically replicated between multiple facilities within an AWS Region
Correct Answer:
D
You specify a region when you create your Amazon S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities. Please refer to Regional Products and Services for details of Amazon S3 service availability by region.
Reference:
https://aws.amazon.com/s3/faqs/
Question #47
You are tasked with setting up a cluster of EC2 instances for a NoSQL database. The database requires random read I/O disk performance of up to 100,000 IOPS at a 4 KB block size per node.
Which of the following EC2 instances will perform the best for this workload?
- A. A High-Memory Quadruple Extra Large (m2.4xlarge) with EBS-Optimized set to true and a PIOPS EBS volume
- B. A Cluster Compute Eight Extra Large (cc2.8xlarge) using instance storage
- C. A High I/O Quadruple Extra Large (hi1.4xlarge) using instance storage
- D. A Cluster GPU Quadruple Extra Large (cg1.4xlarge) using four separate 4000 PIOPS EBS volumes in a RAID 0 configuration
Correct Answer:
C
The SSD storage is local to the instance. Using PV virtualization, you can expect 120,000 random read IOPS (Input/Output Operations Per Second) and between 10,000 and 85,000 random write IOPS, both with 4K blocks. For HVM and Windows AMIs, you can expect 90,000 random read IOPS and 9,000 to 75,000 random write IOPS.
Reference:
https://aws.amazon.com/blogs/aws/new-high-io-ec2-instance-type-hi14xlarge/
Question #48
When an EC2 EBS-backed (EBS root) instance is stopped, what happens to the data on any ephemeral store volumes?
- A. Data will be deleted and will no longer be accessible (Most Voted)
- B. Data is automatically saved in an EBS volume
- C. Data is automatically saved as an EBS snapshot
- D. Data is unavailable until the instance is restarted
Correct Answer:
A
Data in the instance store is lost under the following circumstances:
- The underlying disk drive fails
- The instance stops
- The instance terminates
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-lifetime
Question #49
Your team is excited about the use of AWS because now they have access to "programmable infrastructure." You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments (development, test, QA, production), revert back to previous versions, and identify what versions are running at any particular time.
Which approach addresses this requirement?
- A. Use cost allocation reports and AWS OpsWorks to deploy and manage your infrastructure.
- B. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.
- C. Use AWS Elastic Beanstalk and a version control system like Git to deploy and manage your infrastructure.
- D. Use AWS CloudFormation and a version control system like Git to deploy and manage your infrastructure.
Correct Answer:
D
CloudFormation templates are plain-text (JSON or YAML) descriptions of your infrastructure, so they can be stored in a version control system such as Git alongside application code. This lets you deploy exact copies of a given infrastructure version, stage changes through separate environments, revert to a previous template version, and identify which version any running stack was created from.
Reference:
https://aws.amazon.com/cloudformation/faqs/
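The approach in option D can be sketched with a minimal CloudFormation template kept under version control. The resource, logical name, and file name below are illustrative, not taken from the question.

```yaml
# template.yaml -- versioned in Git alongside application code
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative stack (resources are placeholders)
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
```

Checking out a tagged Git revision and deploying it (for example with `aws cloudformation deploy --template-file template.yaml --stack-name qa-stack`) recreates that exact infrastructure version in any environment.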
Question #50
You have a server with a 500 GB Amazon EBS data volume. The volume is 80% full. You need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact.
Which of the following backup methods will best fulfill your requirements?
- A. Take periodic snapshots of the EBS volume
- B. Use a third-party incremental backup application to back up to Amazon Glacier
- C. Periodically back up all data to a single compressed archive and archive to Amazon S3 using a parallelized multi-part upload
- D. Create another EBS volume in the second Availability Zone, attach it to the Amazon EC2 instance, and use a disk manager to mirror the two disks
Correct Answer:
A
EBS volumes can only be attached to EC2 instances within the same Availability Zone.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html
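The snapshot-based cross-AZ workflow can be sketched as below. EBS snapshots are stored regionally, so a snapshot taken from a volume in one Availability Zone can seed a new volume in another. The kwargs builder mirrors the parameters of boto3's `create_volume`; the snapshot ID, AZ name, and volume type are placeholders.

```python
# Sketch: recreate an EBS volume from a snapshot in a different AZ.
def restore_volume_request(snapshot_id: str, target_az: str) -> dict:
    """Build the create-volume call that recreates the backed-up volume
    in a different Availability Zone from a regional snapshot."""
    return {
        "SnapshotId": snapshot_id,
        "AvailabilityZone": target_az,  # may differ from the source volume's AZ
        "VolumeType": "gp3",
    }

request = restore_volume_request("snap-0123456789abcdef0", "us-east-1b")
# With boto3 (not executed here):
# boto3.client("ec2").create_volume(**request)
```

Snapshots are also incremental after the first one, which keeps the periodic backups in option A fast during the brief application pause.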