IBM C2090-101 Exam Practice Questions
Question #1
Which of the following statements is true about improving the performance of the Big SQL LOAD HADOOP statement?
- A. You can improve performance by increasing the number of map tasks assigned to the load.
- B. When loading large files, the number of files that you load does not impact the performance of the LOAD HADOOP statement.
- C. You can improve performance by decreasing the number of map tasks assigned to the load and adjusting the heap size.
- D. It is advantageous to run the LOAD HADOOP statement pointing directly at large files in the host file system, as opposed to copying the files to the DFS prior to the load.
Answer: A
Reference:
https://www.ibm.com/support/knowledgecenter/en/SSCRJT_5.0.3/com.ibm.swg.im.bigsql.doc/doc/bigsql_loadperf.html
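As a practical illustration of answer A, here is a minimal Scala sketch that issues LOAD HADOOP through the Db2 JDBC driver and raises the map-task count. The host, port, credentials, table, and staging path are hypothetical placeholders, and the 'num.map.tasks' load property is assumed here to match the Big SQL LOAD documentation linked above.

```scala
import java.sql.DriverManager

// Minimal sketch: run LOAD HADOOP over JDBC with more map tasks.
// Connection details, table, and paths are hypothetical placeholders.
object LoadHadoopTuning {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection(
      "jdbc:db2://bigsql-head.example.com:32051/BIGSQL", "bigsql", "secret")
    val stmt = conn.createStatement()
    // Files are staged in the DFS before the load; a higher map-task
    // count spreads the parse-and-convert work across the cluster.
    stmt.execute(
      """LOAD HADOOP USING FILE URL '/staging/sales'
        |WITH SOURCE PROPERTIES ('field.delimiter' = ',')
        |INTO TABLE sales_fact
        |WITH LOAD PROPERTIES ('num.map.tasks' = '20')""".stripMargin)
    stmt.close()
    conn.close()
  }
}
```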

Question #2
Which two statements are true about using Data Click to move data from a relational database into Hadoop?
- A. Big SQL cannot be used to access the data moved in by Data Click because the data is in Hive.
- B. You must import metadata for all sources and targets that you want to make available for Data Click activities.
- C. Connections from the relational database source to HDFS are discovered automatically from within Data Click.
- D. Hive tables are automatically created every time you run an activity that moves data from a relational database into HDFS.
- E. HBase tables are automatically created every time you run an activity that moves data from a relational database into HDFS.
Answer: B, D
(Data Click requires importing source and target metadata before they can be used in activities, and the reference below documents that Hive tables are created automatically when an activity moves relational data into HDFS.)
Reference:
https://www.ibm.com/support/knowledgecenter/en/SSZJPZ_11.3.0/com.ibm.swg.im.iis.dataclick.doc/topics/hivetables.html

Question #3
Which of the following statements is true about InfoSphere Streams integration with Hadoop?
- A. InfoSphere Streams can both read data from and write data to HDFS.
- B. The Streams Big Data toolkit operators that interface with HDFS use Apache Flume to integrate with Hadoop.
- C. Streams applications never need to be concerned with making the data schemas consistent with those on Hadoop.
- D. Big SQL can be used to preprocess the data as it flows through InfoSphere Streams before the data lands in HDFS.
Answer: A
(The Streams Big Data toolkit provides HDFS source and sink operators, so Streams applications can both read from and write to HDFS; Big SQL queries data at rest and does not preprocess data in flight.)

Question #4
Which of the following statements is true about Spark RDD persistence and serialization?
- A. It is advised to use Java serialization over Kryo serialization.
- B. Storing the object in serialized form will lead to faster access times.
- C. Storing the object in serialized form will lead to slower access times.
- D. All of the above.
Answer: C
(Per the Spark programming guide cited below, serialized storage is more space-efficient but more CPU-intensive to read, so access is slower.)
Reference:
https://spark.apache.org/docs/latest/rdd-programming-guide.html
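Answer C, and the rejection of option A, can be seen in a short Scala sketch against the Spark API: MEMORY_ONLY_SER keeps each cached partition as a serialized byte array, which is compact but pays a deserialization cost on every read, while Kryo is enabled because the Spark tuning guide recommends it over Java serialization. The Event class and data below are made up for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// Made-up record type used only to give the cache something to serialize.
case class Event(id: Long, payload: String)

object SerializedCacheDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("serialized-cache-demo")
      .master("local[*]")
      // Kryo is generally faster and more compact than Java serialization.
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate()

    val events = spark.sparkContext
      .parallelize(1L to 100000L)
      .map(i => Event(i, s"payload-$i"))

    // Space-efficient, but each access must deserialize the partition,
    // so reads are slower than with the plain MEMORY_ONLY level.
    events.persist(StorageLevel.MEMORY_ONLY_SER)
    println(events.count()) // materializes the cache
    spark.stop()
  }
}
```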

Question #5
Which of the following statements is true about Sqoop?
- A. HBase is not supported as an import target.
- B. Data imported using Sqoop is always written to a single Hive partition.
- C. Sqoop can be used to retrieve rows newer than some previously imported set of rows.
- D. Sqoop can only append new rows to a database table when exporting back to a database.
Answer: C
Reference:
https://sqoop.apache.org/docs/1.4.1-incubating/SqoopUserGuide.html
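To make answer C concrete, here is a small Scala sketch that shells out to the sqoop CLI for an incremental append import. The --incremental, --check-column, and --last-value options are standard Sqoop import flags, while the JDBC URL, table, and column names are hypothetical.

```scala
import scala.sys.process._

object SqoopIncrementalImport {
  def main(args: Array[String]): Unit = {
    // Imports only rows whose order_id exceeds the previous run's
    // high-water mark, i.e. rows newer than the already-imported set.
    val exitCode = Seq(
      "sqoop", "import",
      "--connect", "jdbc:mysql://db.example.com/sales", // hypothetical source
      "--table", "orders",
      "--incremental", "append",    // fetch only new rows
      "--check-column", "order_id", // monotonically increasing key
      "--last-value", "1000",       // max order_id from the previous run
      "--target-dir", "/data/orders"
    ).!                             // runs the command, returns exit code
    if (exitCode != 0) sys.error(s"sqoop import failed ($exitCode)")
  }
}
```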
