Oracle 1z0-449 Exam Practice Questions (P. 2)
Question #6
Your customer needs to manage configuration information on the Big Data Appliance.
Which service would you choose?
- A. SparkPlug
- B. ApacheManager
- C. Zookeeper
- D. Hive Server
- E. JobMonitor
Correct Answer: C
The ZooKeeper utility provides configuration and state management and distributed coordination services to Dgraph nodes of the Big Data Discovery cluster. It ensures high availability of the query processing by the Dgraph nodes in the cluster.
References:
https://docs.oracle.com/cd/E57471_01/bigData.100/admin_bdd/src/cadm_cluster_zookeeper.html
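For context, services and applications on the cluster read this kind of shared configuration through the standard ZooKeeper client API. The Java sketch below illustrates that; the quorum address, znode path, and stored value are illustrative assumptions, not part of the question.

```java
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

import java.nio.charset.StandardCharsets;

/** Minimal sketch: read a configuration value stored in a ZooKeeper znode. */
public class ZkConfigReader {
    public static void main(String[] args) throws Exception {
        // Illustrative quorum address; substitute the ZooKeeper hosts of your cluster.
        String connectString = "bda1node01:2181,bda1node02:2181,bda1node03:2181";

        // Connect with a 30-second session timeout; the watcher lambda ignores events here.
        ZooKeeper zk = new ZooKeeper(connectString, 30000, event -> { });
        try {
            // Hypothetical znode holding a piece of shared configuration.
            String path = "/myapp/config/batch-size";
            Stat stat = new Stat();
            byte[] data = zk.getData(path, false, stat);   // read the value and its metadata
            System.out.println(path + " = " + new String(data, StandardCharsets.UTF_8)
                    + " (version " + stat.getVersion() + ")");
        } finally {
            zk.close();
        }
    }
}
```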
Question #7
You are helping your customer troubleshoot the use of the Oracle Loader for Hadoop Connector in online mode. You have performed steps 1, 2, 4, and 5.
STEP 1: Connect to the Oracle database and create a target table.
STEP 2: Log in to the Hadoop cluster (or client).
STEP 3: Missing step -
STEP 4: Create a shell script to run the OLH job.
STEP 5: Run the OLH job.
What step is missing between step 2 and step 4?
- A. Diagnose the job failure and correct the error.
- B. Copy the table metadata to the Hadoop system.
- C. Create an XML configuration file.
- D. Query the table to check the data.
- E. Create an OLH metadata file.
Correct Answer: C
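For reference, the missing step 3 is creating the XML configuration file that describes the OLH job: the target table, the database connection, and the input format. The sketch below uses Hadoop's Configuration API to assemble those properties and write the XML file that the step 4 shell script would pass to the loader. The property names follow the Oracle Loader for Hadoop documentation but should be verified against your release; the table name, JDBC URL, and HDFS path are placeholders.

```java
import org.apache.hadoop.conf.Configuration;

import java.io.FileOutputStream;
import java.io.OutputStream;

/**
 * Sketch of step 3: build the OLH job configuration and save it as an XML file.
 * Property names are taken from the Oracle Loader for Hadoop documentation;
 * verify them against your installed release. All values are placeholders.
 */
public class BuildOlhConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(false);   // start from an empty configuration

        // Target table in the Oracle database (created in step 1).
        conf.set("oracle.hadoop.loader.loaderMap.targetTable", "SALES_EXT");

        // JDBC connection to the target database (placeholder host and service name).
        conf.set("oracle.hadoop.loader.connection.url",
                 "jdbc:oracle:thin:@//dbhost.example.com:1521/orcl");
        conf.set("oracle.hadoop.loader.connection.user", "SCOTT");

        // Input: delimited text files in HDFS (placeholder path).
        conf.set("mapreduce.job.inputformat.class",
                 "oracle.hadoop.loader.lib.input.DelimitedTextInputFormat");
        conf.set("mapreduce.input.fileinputformat.inputdir", "/user/oracle/sales_input");

        // Write the configuration out as the XML file used in steps 4 and 5.
        try (OutputStream out = new FileOutputStream("olh_job_conf.xml")) {
            conf.writeXml(out);
        }
    }
}
```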
Question #8
The hdfs_stream script is used by the Oracle SQL Connector for HDFS to perform a specific task to access data.
What is the purpose of this script?
- A. It is the preprocessor script for the Impala table.
- B. It is the preprocessor script for the HDFS external table.
- C. It is the streaming script that creates a database directory.
- D. It is the preprocessor script for the Oracle partitioned table.
- E. It defines the jar file that points to the directory where Hive is installed.
Correct Answer: B
The hdfs_stream script is the preprocessor for the Oracle Database external table created by Oracle SQL Connector for HDFS.
References:
https://docs.oracle.com/cd/E37231_01/doc.20/e36961/start.htm#BDCUG107
Question #9
How should you encrypt the Hadoop data that sits on disk?
- A. Enable Transparent Data Encryption by using the Mammoth utility.
- B. Enable HDFS Transparent Encryption by using bdacli on a Kerberos-secured cluster.
- C. Enable HDFS Transparent Encryption on a non-Kerberos secured cluster.
- D. Enable Audit Vault and Database Firewall for Hadoop by using the Mammoth utility.
Correct Answer: B
HDFS Transparent Encryption protects Hadoop data that is at rest on disk. When encryption is enabled for a cluster, data write and read operations on encrypted zones (HDFS directories) on the disk are automatically encrypted and decrypted. This process is "transparent" because it is invisible to the application working with the data.
The cluster where you want to use HDFS Transparent Encryption must have Kerberos enabled.
Incorrect Answers:
C: Because the cluster must have Kerberos enabled, HDFS Transparent Encryption cannot be used on a non-Kerberos secured cluster.
References:
https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/using-hdfs-transparent-encryption.html#GUID-16649C5A-2C88-4E75-809A-BBF8DE250EA3
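Once HDFS Transparent Encryption is enabled and an administrator has created an encryption zone (for example with the hdfs crypto -createZone command), applications keep using the ordinary HDFS API and never handle keys themselves. The Java sketch below illustrates that transparency; the zone path, file name, and content are illustrative assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.nio.charset.StandardCharsets;

/**
 * Sketch: writing into an HDFS encryption zone looks exactly like writing to any
 * other HDFS directory -- that is what makes the encryption "transparent".
 * Assumes HDFS Transparent Encryption is enabled on the Kerberos-secured cluster
 * and that /enc_zone was created as an encryption zone by an administrator;
 * the path and content are placeholders.
 */
public class WriteToEncryptionZone {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();          // picks up core-site/hdfs-site settings
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/enc_zone/report.txt");      // a file inside the encryption zone
        try (FSDataOutputStream out = fs.create(file, true)) {
            // The HDFS client encrypts the bytes with the zone's data encryption key;
            // nothing in this code is encryption-specific.
            out.write("quarterly numbers".getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("Wrote " + fs.getFileStatus(file).getLen() + " bytes to " + file);
    }
}
```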
Question #10
What two things does the Big Data SQL push down to the storage cell on the Big Data Appliance? (Choose two.)
- A. Transparent Data Encrypted data
- B. the column selection of data from individual Hadoop nodes
- C. WHERE clause evaluations
- D. PL/SQL evaluation
- E. Business Intelligence queries from connected Exalytics servers
Correct Answer: BC
Big Data SQL Smart Scan pushes column projection (selecting only the needed columns from the data on the individual Hadoop nodes) and WHERE clause (predicate) evaluation down to the Big Data SQL cells on the Big Data Appliance, so that only the relevant rows and columns are returned to the database.