Weight TF-IDF

evansh (Member, Posts: 6, Contributor I)
edited April 2020 in Help
Hey all,

I'm using the Process Documents operator to output a tokenized word vector for each document, with the TF-IDF calculated. I'd also like to weight the TF-IDF by the number of tokens in each document. I have the number of tokens (Num_Tokens) calculated for each document, but I can't figure out a way to divide TF-IDF by Num_Tokens for each term in each document. Any tips? Thanks!
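
In spreadsheet terms, the calculation I'm after is roughly this (a pandas sketch against a hypothetical CSV export of the word vector, just to illustrate the target; the file and column names are placeholders):

    import pandas as pd

    # Hypothetical export of the Process Documents output: one row per document,
    # one TF-IDF column per term, plus the Num_Tokens column.
    df = pd.read_csv("word_vector.csv")

    tfidf_cols = [c for c in df.columns if c != "Num_Tokens"]

    # Divide every TF-IDF value by its document's token count (row-wise).
    df[tfidf_cols] = df[tfidf_cols].div(df["Num_Tokens"], axis=0)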

Answers

  • evansh (Member, Posts: 6, Contributor I)
    Bump. Could really use a hand.
  • JEdward (RapidMiner Certified Analyst, RapidMiner Certified Expert, Member, Posts: 578, Unicorn)
    I'm a little tired this morning, but if I'm reading correctly you have:

    TF-IDF calculated for each attribute in your dataset, with each example representing one document.
    An additional attribute showing the number of tokens in each document.

    And you want to calculate TF-IDF / Num_Tokens for each example and each attribute?

    If this is the right interpretation, I'd recommend using a Generate Attributes operator inside a Loop Attributes operator.
    Loop Attributes will iterate over all of your TF-IDF attributes, and inside the loop you can use the loop macro to divide the current attribute by your Num_Tokens value (rough sketch at the end of this post).

    Hope that helps
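
    Purely for illustration, here's what that loop looks like outside RapidMiner (a Python/pandas sketch, not RapidMiner code; inside Loop Attributes the current attribute name is exposed as a macro, e.g. %{loop_attribute}, so the Generate Attributes expression would be along the lines of %{loop_attribute} / Num_Tokens):

        import pandas as pd

        df = pd.read_csv("word_vector.csv")   # hypothetical export: one row per document
        tfidf_cols = [c for c in df.columns if c != "Num_Tokens"]

        # One iteration per TF-IDF attribute, mirroring Loop Attributes;
        # the division inside is the Generate Attributes step.
        for col in tfidf_cols:
            df[col] = df[col] / df["Num_Tokens"]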
  • evansh (Member, Posts: 6, Contributor I)
    This is perfect; thank you so much for the help!
  • evansh (Member, Posts: 6, Contributor I)
    Had a follow-up question on the same process, so I figured I'd open this back up. The above solution does exactly what I need it to do. However, my example set has around 160 million data points, and the Loop Attributes operator takes almost a day to run. I get that my data set is large, but I wouldn't think that performing 160 million divisions should take nearly 24 hours. Am I missing something? Is there any way to make this run more efficiently?
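
    For scale: 160 million divisions done as a single vectorized pass finish in a second or two on ordinary hardware (rough NumPy sketch with made-up dimensions, assuming the whole matrix fits in memory), so the raw arithmetic itself shouldn't be the bottleneck.

        import numpy as np

        # Hypothetical shape: 200,000 documents x 800 terms = 160 million values.
        n_docs, n_terms = 200_000, 800
        tfidf = np.random.rand(n_docs, n_terms)
        num_tokens = np.random.randint(1, 500, size=(n_docs, 1)).astype(float)

        # Row-wise broadcast: each TF-IDF value divided by its document's token count.
        weighted = tfidf / num_tokens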