How to write an example set from the local repository to Amazon S3
I've reformatted a large data set in RapidMiner and now want to write the result to S3. However, connecting the dataset (via the Retrieve operator) to the input port of the Write Amazon S3 operator results in the error: Wrong connection - Your connection is producing the wrong type of data. Try changing the starting point of the connection.
The Write Amazon S3 operator only seems to work when I feed it a file via the Open File operator. But the data I need is stored locally as binary .ioo and .md files, and when I upload either of these to S3 and then read them back, the contents are nonsense.
Could anyone suggest anything? I've also tried writing to Redshift using the Write Database operator, but it runs extremely slowly, to the point of crashing RapidMiner. I know my upload speed isn't the problem, as I'm running RapidMiner on a server with a 700MB upload speed. Many thanks in advance!
Best Answer
JEdward · RapidMiner Certified Analyst, RapidMiner Certified Expert · Posts: 578 · Unicorn
One thing you might want to do is write the data into a format such as CSV before uploading it to S3.
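Outside RapidMiner, the same idea looks like the sketch below: materialize the example set as a CSV file, then push that file to S3. This is a minimal illustration using pandas and boto3; the file path, bucket name, and key are hypothetical placeholders, not anything from the original process.

```python
import pandas as pd
import boto3

# Hypothetical names -- replace with your own paths/bucket/key.
LOCAL_CSV = "reformatted_data.csv"
BUCKET = "my-bucket"
KEY = "exports/reformatted_data.csv"

# Step 1: write the data set out as CSV (in a RapidMiner process this
# is the Write CSV operator; pandas is a stand-in here).
df = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
df.to_csv(LOCAL_CSV, index=False)

# Step 2: upload the resulting file to S3. boto3 resolves credentials
# from the environment / AWS config as usual.
s3 = boto3.client("s3")
s3.upload_file(LOCAL_CSV, BUCKET, KEY)
```

Inside RapidMiner, the equivalent wiring is to connect the file output of a Write CSV operator to the Write Amazon S3 operator, so that S3 receives a file object rather than an example set.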
Also, I haven't tested Redshift upload & download speeds via JDBC, but let's assume there is some bottleneck that makes both the process suggested above and the Redshift write run slowly (a COPY-based workaround is sketched after the link below).
In that case, you can spin up a small EMR cluster and connect to it with RapidMiner Radoop. Then, inside Radoop, use a Read CSV operator to stream your data into AWS, and finally use a Write Database or Store in Hive operator to write it to S3.
See here for an article on Store in Hive with custom storage handlers.
Custom storage handlers on Hadoop when using Radoop "Store in Hive"
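As a side note on the Redshift slowness mentioned above: row-by-row inserts over JDBC are known to be slow, and the usual fast path into Redshift is to land the CSV in S3 first and then issue a COPY, which loads in parallel on the Redshift side. Below is a minimal sketch using psycopg2; the connection details, table name, S3 path, and IAM role are all hypothetical placeholders.

```python
import psycopg2

# Hypothetical connection details -- replace with your cluster's.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsuser",
    password="...",
)

# COPY pulls the file from S3 in parallel inside Redshift, which is far
# faster than streaming individual INSERTs over JDBC/ODBC.
copy_sql = """
    COPY my_table
    FROM 's3://my-bucket/exports/reformatted_data.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    CSV IGNOREHEADER 1;
"""

# The context managers commit on success and close the cursor.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```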
Answers
Thank you @JEdward! Of course, it seems obvious now that I needed to feed a CSV file to the Write Amazon S3 operator rather than the example set.