Snowflake SnowPro Advanced Architect Exam Practice Questions (P. 5)
Question #21
A healthcare company running the Business Critical edition of Snowflake needs to share data with a research institute whose Snowflake account is on a lower edition (Standard). How can this data be shared?
- A. The healthcare company will need to change the institute's Snowflake edition in the accounts panel.
- B. By default, sharing is supported from a Business Critical Snowflake edition to a Standard edition.
- C. Contact Snowflake and they will execute the share request for the healthcare company.
- D. Set the share_restriction parameter on the shared object to false. (Most Voted)
Suggested Answer: C
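For context, the most-voted option (D) maps to the SHARE_RESTRICTIONS parameter that a Business Critical provider can set when adding a non-Business Critical consumer account to a share. A minimal sketch run in the provider (healthcare company) account; the share, database, and account names are placeholders:

```sql
-- Provider (Business Critical) account: create and populate the share
CREATE SHARE patient_data_share;
GRANT USAGE ON DATABASE clinical_db TO SHARE patient_data_share;
GRANT USAGE ON SCHEMA clinical_db.public TO SHARE patient_data_share;
GRANT SELECT ON TABLE clinical_db.public.anonymized_results TO SHARE patient_data_share;

-- Add the lower-edition consumer account, explicitly lifting the default restriction
ALTER SHARE patient_data_share
  ADD ACCOUNTS = research_org.institute_account
  SHARE_RESTRICTIONS = false;
```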

Question #22
Which of the following is recommended for optimizing the cost associated with the Snowflake Kafka connector?
- A. Utilize a higher buffer.flush.time in the connector configuration. (Most Voted)
- B. Utilize a higher buffer.size.bytes in the connector configuration.
- C. Utilize a lower buffer.size.bytes in the connector configuration.
- D. Utilize a lower buffer.count.records in the connector configuration.
Suggested Answer: D
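The cost lever here is the number of files the connector hands to Snowpipe: a longer buffer.flush.time lets the connector batch more records per file, so Snowpipe's per-file overhead is charged less often. An illustrative snippet of the relevant connector properties (the values shown are examples, not recommendations):

```properties
# Snowflake Kafka connector buffer settings (defaults: 120 s / 10000 records / 5 MB)
# A longer flush interval produces fewer, larger files for Snowpipe to load.
buffer.flush.time=300
buffer.count.records=10000
buffer.size.bytes=5000000
```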

Question #23
The QA account is intended to run and test changes to data and database objects before pushing those changes to the Production account. All database objects and data in the QA account, including privileges, must be an exact copy of those in the Production account, refreshed at least nightly.
Which is the LEAST complex approach to use to populate the QA account with the Production account’s data and database objects on a nightly basis?
- A.
  1. Create a share in the Production account for each database
  2. Share access to the QA account as a Consumer
  3. The QA account creates a database directly from each share
  4. Create clones of those databases on a nightly basis
  5. Run tests directly on those cloned databases
- B.
  1. Create a stage in the Production account
  2. Create a stage in the QA account that points to the same external object-storage location
  3. Create a task that runs nightly to unload each table in the Production account into the stage
  4. Use Snowpipe to populate the QA account
- C. (Most Voted)
  1. Enable replication for each database in the Production account
  2. Create replica databases in the QA account
  3. Create clones of the replica databases on a nightly basis
  4. Run tests directly on those cloned databases
- D.
  1. In the Production account, create an external function that connects into the QA account and returns all the data for one specific table
  2. Run the external function as part of a stored procedure that loops through each table in the Production account and populates each table in the QA account
Suggested Answer: A
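As a sketch of the most-voted approach (option C), database replication plus nightly cloning looks roughly like this; the account and database names are placeholders, and the nightly REFRESH and CLONE statements would normally be wrapped in scheduled tasks:

```sql
-- Production account: allow replication of the database to the QA account
ALTER DATABASE prod_db ENABLE REPLICATION TO ACCOUNTS myorg.qa_account;

-- QA account: create a secondary (replica) database, then refresh it nightly
CREATE DATABASE prod_db_replica AS REPLICA OF myorg.prod_account.prod_db;
ALTER DATABASE prod_db_replica REFRESH;

-- QA account: clone the refreshed replica each night and run tests against the clone
CREATE OR REPLACE DATABASE qa_nightly CLONE prod_db_replica;
```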

Question #24
- A. ACCOUNTADMIN, SECURITYADMIN
- B. SYSADMIN, SECURITYADMIN
- C. ACCOUNTADMIN, USER with PRIVILEGE (Most Voted)
- D. SECURITYADMIN, USER with PRIVILEGE
Suggested Answer: A

Question #25
The data pipeline needs to run continuously and efficiently as new records arrive in object storage, leveraging event notifications. Operational complexity, infrastructure maintenance (including platform upgrades and security), and development effort should all be minimal.
Which design will meet these requirements?
- A. Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
- B. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies. (Most Voted)
- C. Ingest the data into Snowflake using Amazon EMR and PySpark with the Snowflake Spark connector. Apply transformations using another Spark job. Develop a Python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.
- D. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
Suggested Answer: B
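A rough outline of the most-voted option (B) in Snowflake SQL. All names are placeholders: the external stage (with S3 event notifications already wired to the pipe), the API integration comprehend_api_integration, and the API Gateway endpoint in front of Amazon Comprehend are assumed to exist, and the raw table holds each JSON file row as a single VARIANT column:

```sql
-- Placeholder landing and output tables
CREATE TABLE raw_reviews (raw VARIANT);
CREATE TABLE scored_reviews (review_id STRING, sentiment VARIANT);

-- Continuous, event-driven ingestion of new files via Snowpipe
CREATE PIPE reviews_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_reviews
  FROM @reviews_stage
  FILE_FORMAT = (TYPE = 'JSON');

-- Track rows added by the pipe
CREATE STREAM raw_reviews_stream ON TABLE raw_reviews;

-- External function that proxies to Amazon Comprehend (placeholder endpoint)
CREATE EXTERNAL FUNCTION comprehend_sentiment(review_text STRING)
  RETURNS VARIANT
  API_INTEGRATION = comprehend_api_integration
  AS 'https://example.execute-api.us-east-1.amazonaws.com/prod/sentiment';

-- Task runs only when the stream has data, scores new reviews, writes final records
CREATE TASK score_reviews_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_REVIEWS_STREAM')
AS
  INSERT INTO scored_reviews
  SELECT raw:review_id::STRING,
         comprehend_sentiment(raw:review_text::STRING)
  FROM raw_reviews_stream;

ALTER TASK score_reviews_task RESUME;
```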
