Oracle 1z0-449 Exam Practice Questions (P. 1)
Question #1
You need to place the results of a PigLatin script into an HDFS output directory.
What is the correct syntax in Apache Pig?
- A. update hdfs set D as './output';
- B. store D into './output';
- C. place D into './output';
- D. write D as './output';
- E. hdfsstore D into './output';
Correct Answer:
B
Use the STORE operator to run (execute) Pig Latin statements and save (persist) results to the file system. Use STORE for production scripts and batch mode processing.
Syntax: STORE alias INTO 'directory' [USING function];
Example: Here the data is stored using PigStorage with the asterisk character (*) as the field delimiter.
A = LOAD 'data' AS (a1:int,a2:int,a3:int);
DUMP A;
(1,2,3)
(4,2,1)
(8,3,4)
(4,3,3)
(7,2,5)
(8,4,3)
STORE A INTO 'myoutput' USING PigStorage('*');
CAT myoutput;
1*2*3
4*2*1
8*3*4
4*3*3
7*2*5
8*4*3
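Applied to this question, a minimal sketch of answer B (the relation D, its schema, and the input path are hypothetical):
D = LOAD 'input' AS (id:int, name:chararray);  -- any relation produced earlier in the script
STORE D INTO './output';                       -- persists D as part files under the HDFS directory ./output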
References:
https://pig.apache.org/docs/r0.13.0/basic.html#store
Question #2
How is Oracle Loader for Hadoop (OLH) better than Apache Sqoop?
- A. OLH performs a great deal of preprocessing of the data on Hadoop before loading it into the database.
- B. OLH performs a great deal of preprocessing of the data on the Oracle database before loading it into NoSQL.
- C. OLH does not use MapReduce to process any of the data, thereby increasing performance.
- D. OLH performs a great deal of preprocessing of the data on the Oracle database before loading it into Hadoop.
- E. OLH is fully supported on the Big Data Appliance. Apache Sqoop is not supported on the Big Data Appliance.
Correct Answer:
A
Oracle Loader for Hadoop provides an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database.
Oracle Loader for Hadoop prepartitions the data if necessary and transforms it into a database-ready format. It optionally sorts records by primary key or user-defined columns before loading the data or creating output files.
Note: Apache Sqoop(TM) is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.
Incorrect Answers:
B, D: Oracle Loader for Hadoop moves data from a Hadoop cluster into a table in an Oracle database; the preprocessing happens on Hadoop, not on the Oracle database, and the target is neither NoSQL nor Hadoop.
C: Oracle Loader for Hadoop is a MapReduce application that is invoked as a command-line utility (see the invocation sketch after this list). It accepts the generic command-line options that are supported by the org.apache.hadoop.util.Tool interface.
E: The Oracle Linux operating system and Cloudera's Distribution including Apache Hadoop (CDH) underlie all other software components installed on Oracle Big Data Appliance. CDH includes Apache projects for MapReduce and HDFS, such as Hive, Pig, Oozie, ZooKeeper, HBase, Sqoop, and Spark, so Sqoop is in fact supported on the appliance.
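As noted for option C, OLH runs as a standard Hadoop job submitted from the command line. A minimal sketch of that invocation (the configuration file name is a placeholder; $OLH_HOME is assumed to point at the OLH installation):
hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
    -conf my_loader_job.xml -libjars $OLH_HOME/jlib/oraloader.jar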
References:
https://docs.oracle.com/cd/E37231_01/doc.20/e36961/start.htm#BDCUG326
https://docs.oracle.com/cd/E55905_01/doc.40/e55814/concepts.htm#BIGUG117
Question #3
Which three pieces of hardware are present on each node of the Big Data Appliance? (Choose three.)
- A. high capacity SAS disks
- B. memory
- C. redundant Power Delivery Units
- D. InfiniBand ports
- E. InfiniBand leaf switches
Correct Answer:
ABD
Big Data Appliance Hardware Specification and Details, example:
Per Node:
- 2 x Eight-Core Intel Xeon E5-2660 Processors (2.2 GHz)
- 64 GB Memory (expandable to 256 GB)
- Disk Controller HBA with 512 MB battery-backed write cache
- 12 x 3 TB 7,200 RPM High Capacity SAS Disks
- 2 x QDR InfiniBand (Quad Data Rate, 40 Gb/s) Ports
- 4 x 10 Gb Ethernet Ports
- 1 x ILOM Ethernet Port
References:
http://www.oracle.com/technetwork/server-storage/engineered-systems/bigdata-appliance/overview/bigdataappliancev2-datasheet-1871638.pdf
Question #4
What two actions do the following commands perform in the Oracle R Advanced Analytics for Hadoop Connector? (Choose two.)
ore.connect(type="HIVE")
ore.attach()
- A. Connect to Hive.
- B. Attach the Hadoop libraries to R.
- C. Attach the current environment to the search path of R.
- D. Connect to NoSQL via Hive.
Correct Answer:
AC
You can connect to Hive and manage objects using R functions that have an ore prefix, such as ore.connect.
To attach the current environment to the search path of R, use:
ore.attach()
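Putting both calls together, a minimal sketch (assumes the ORCH client package is installed and a Hive server is reachable; the ore.ls() call is included only to illustrate that Hive tables become visible after connecting):
library(ORCH)               # Oracle R Advanced Analytics for Hadoop client package (assumed installed)
ore.connect(type = "HIVE")  # A: establish the connection to Hive
ore.attach()                # C: attach the current environment to R's search path
ore.ls()                    # list Hive tables now accessible as ore.frame objects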
References:
https://docs.oracle.com/cd/E49465_01/doc.23/e49333/orch.htm#BDCUG400
Question #5
Your customer's security team needs to understand how the Oracle Loader for Hadoop Connector writes data to the Oracle database.
Which service performs the actual writing?
- A. OLH agent
- B. reduce tasks
- C. write tasks
- D. map tasks
- E. NameNode
Correct Answer:
B
Oracle Loader for Hadoop has online and offline load options. In the online load option, the data is both preprocessed and loaded into the database as part of the Oracle Loader for Hadoop job: each reduce task makes a connection to Oracle Database and loads into the database in parallel. The database has to be available during the execution of Oracle Loader for Hadoop.
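The online load path is selected through the job's output format class. A sketch of the relevant job-configuration entry (the class name follows the OLH documentation; the exact property key differs across OLH/Hadoop releases, so treat it as an assumption to verify against your version):
<property>
  <name>mapreduce.job.outputformat.class</name>
  <value>oracle.hadoop.loader.lib.output.JDBCOutputFormat</value>
</property>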
References:
http://www.oracle.com/technetwork/bdc/hadoop-loader/connectors-hdfs-wp-1674035.pdf