Q1> I'm curious: how can only 3,000 rows be enough to produce a fine-tuned model params file for a data file with 200,000+ rows?
Q2> I'll take your advice and run the swarm on 3,000 points, but in general, how does swarming scale? Can the work be distributed across machines, and on a single machine does it scale linearly? That is, if 1,000 points take 1 minute, would 5,000 points take 5 minutes?
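To make the linear-scaling assumption in Q2 concrete, here is a minimal back-of-envelope sketch. The function name and baseline timings are hypothetical, chosen only to mirror the numbers in the question; actual swarm runtime depends on the model search, not just row count, so this is an illustration of the assumption, not a claim about real behavior:

```python
def estimate_runtime_min(points, baseline_points=1000, baseline_min=1.0):
    """Extrapolate runtime assuming time grows linearly with the
    number of input points (the assumption stated in Q2)."""
    return baseline_min * points / baseline_points

# Under the linear assumption: 1,000 points -> 1 min, so 5,000 points -> 5 min.
print(estimate_runtime_min(5000))  # 5.0
```

Whether the real swarm matches this extrapolation is exactly what the question is asking.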
Thanks