Is there a specific measure of how many more classifications or signals one algorithm makes or picks up compared to another?

We all know precision and recall. What if two algorithms have the same precision and recall, but one algorithm makes more predictions from the available data? Example input: “I love Samsung. Apple is terrible.” Algorithm 1 predicts Neutral for this input; Algorithm 2 predicts Positive … Read more
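One way to make this difference measurable is to report coverage (the fraction of inputs for which the algorithm commits to a prediction at all) alongside precision and recall. A minimal sketch, assuming string sentiment labels and treating “neutral” as an abstention; the label names and example predictions below are hypothetical:

```python
# Sketch: report coverage alongside precision/recall, treating "neutral"
# as an abstention. Labels and predictions are made up for illustration.
from sklearn.metrics import precision_score, recall_score

y_true = ["pos", "neg", "pos", "neg", "pos"]
pred_a = ["neutral", "neg", "pos", "neutral", "pos"]   # hypothetical algorithm 1
pred_b = ["pos", "neg", "pos", "neg", "neutral"]       # hypothetical algorithm 2

def report(y_true, y_pred, abstain="neutral"):
    kept = [(t, p) for t, p in zip(y_true, y_pred) if p != abstain]
    coverage = len(kept) / len(y_true)        # fraction of inputs actually classified
    t, p = zip(*kept)                         # precision/recall over classified inputs only
    return {
        "coverage": coverage,
        "precision": precision_score(t, p, pos_label="pos"),
        "recall": recall_score(t, p, pos_label="pos"),
    }

print(report(y_true, pred_a))   # same precision/recall, lower coverage
print(report(y_true, pred_b))   # same precision/recall, higher coverage
```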

How to describe deterministic optimisation algorithms using statistics?

I am solving a large set of nonlinear optimisation problems using different algorithms. I have compared their performance using performance profiles (see Dolan and Moré, 2002). These profiles are figures that indicate the global performance of an algorithm: they show what fraction of problems each algorithm solves within a factor τ of the best algorithm. An … Read more
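For reference, a performance profile in the Dolan–Moré sense can be computed from a matrix of solver costs. A minimal sketch, assuming `times` holds a per-problem cost for each solver, with `np.inf` marking failures; the timings below are made up:

```python
# Sketch of a Dolan-Moré performance profile from hypothetical timings.
import numpy as np
import matplotlib.pyplot as plt

times = {
    "solver_A": np.array([1.0, 2.0, 5.0, np.inf]),   # inf = failed to solve
    "solver_B": np.array([1.5, 1.0, 4.0, 8.0]),
}
best = np.min(np.vstack(list(times.values())), axis=0)   # best time per problem

taus = np.logspace(0, 2, 200)
for name, t in times.items():
    ratios = t / best                                     # performance ratio r_{p,s}
    profile = [(ratios <= tau).mean() for tau in taus]    # fraction solved within factor tau
    plt.step(taus, profile, where="post", label=name)

plt.xscale("log")
plt.xlabel("factor tau of best solver")
plt.ylabel("fraction of problems solved")
plt.legend()
plt.show()
```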

LFW face pair-matching performance evaluation, why retrain model on view2? [closed]

I am trying to understand how performance evaluation works in the LFW (Labeled Faces in the Wild) dataset http://vis-www.cs.umass.edu/lfw/. I am interested … Read more

Are there general parameters in which a traditional machine learning algorithm will outperform a Neural Network? [closed]

I know this is a loaded question given the infinite number of circumstances surrounding what kind of machine learning algorithm … Read more

References for “self-fulfilling prophecy bias” in machine learning needed

I am currently looking for a scientific reference (journal article, book, etc.) that describes the challenge of evaluating the performance of a method on data that the method itself produces. For example: a trained ranking algorithm A in a production system is producing ranked result lists of documents for a set of users. These users are … Read more

When doing nested cross-validation, is it better to use n-fold or random sorting?

There are different methods for performing the outer part of nested cross-validation (i.e. the outer cross-validation) for estimating the overall expected generalised classification accuracy, e.g.: use the nested outer n-fold method. In the n-fold method, the dataset is split into n subsets, where n−1 subsets are used for training and the remaining subset is used for testing, … Read more
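For concreteness, a minimal sketch of nested cross-validation with an outer k-fold loop, using scikit-learn; the SVC classifier and its parameter grid are placeholders, not the questioner's actual model:

```python
# Sketch: nested cross-validation with an inner loop for hyperparameter
# tuning and an outer k-fold loop for estimating generalisation accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

inner = KFold(n_splits=3, shuffle=True, random_state=0)   # inner loop: model selection
outer = KFold(n_splits=5, shuffle=True, random_state=1)   # outer loop: performance estimate

model = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner)
scores = cross_val_score(model, X, y, cv=outer)           # one score per outer fold
print(scores.mean(), scores.std())
```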

Why is Keras.fit faster than TensorFlow step using same Arch and Adam? [closed]

I have the exact same model architecture, one in Keras and one in TensorFlow. The TensorFlow model is actually defined in Keras but … Read more
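One common source of such timing gaps (not necessarily the cause here) is whether the hand-written training step is graph-compiled or runs eagerly. A minimal sketch contrasting the two styles, with a placeholder model and random data rather than the questioner's architecture:

```python
# Sketch: Keras model.fit vs. a custom TensorFlow training step.
import numpy as np
import tensorflow as tf

x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1)).astype("float32")

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Style 1: Keras fit (its train step is graph-compiled by default)
m1 = make_model()
m1.compile(optimizer=tf.keras.optimizers.Adam(), loss="binary_crossentropy")
m1.fit(x, y, batch_size=64, epochs=1, verbose=0)

# Style 2: custom loop; without @tf.function it runs eagerly and is slower
m2 = make_model()
opt = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.BinaryCrossentropy()

@tf.function                      # remove this decorator to see the eager-mode slowdown
def train_step(xb, yb):
    with tf.GradientTape() as tape:
        loss = loss_fn(yb, m2(xb, training=True))
    grads = tape.gradient(loss, m2.trainable_variables)
    opt.apply_gradients(zip(grads, m2.trainable_variables))
    return loss

ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)
for xb, yb in ds:
    train_step(xb, yb)
```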

Overall rank from multiple ranked lists

I’ve looked through a lot of literature available online, including this forum, without any luck, and am hoping someone can help with a statistical issue I currently face: I have 5 lists of ranked data, each containing 10 items ranked from position 1 (best) to position 10 (worst). For the sake of context, the 10 items in … Read more
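One simple baseline for combining such lists (not necessarily what the answers recommend) is aggregation by mean rank, a Borda-count-style approach. A minimal sketch with five made-up lists of ten hypothetical items:

```python
# Sketch: aggregate several ranked lists by mean rank (Borda-count style).
import numpy as np

items = [f"item_{i}" for i in range(1, 11)]
rng = np.random.default_rng(0)
# Five hypothetical ranked lists: position 0 = rank 1 (best), position 9 = rank 10 (worst).
ranked_lists = [list(rng.permutation(items)) for _ in range(5)]

mean_rank = {
    item: np.mean([lst.index(item) + 1 for lst in ranked_lists])
    for item in items
}
overall = sorted(items, key=lambda it: mean_rank[it])     # lower mean rank = better
for pos, it in enumerate(overall, start=1):
    print(pos, it, mean_rank[it])
```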

How to select a clustering method? How to validate a cluster solution (to warrant the method choice)?

One of the biggest issues with cluster analysis is that we may arrive at different conclusions depending on which clustering method is used (including different linkage methods in hierarchical clustering). I would like to know your opinion on this – which method would you select, and how? One might say “the best … Read more
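One common way to compare candidate methods on the same data is an internal validity index such as the silhouette score. A minimal sketch with a synthetic dataset and placeholder method choices; silhouette is only one of many possible indices:

```python
# Sketch: compare clustering methods on the same data via silhouette score.
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

candidates = {
    "kmeans_k4": KMeans(n_clusters=4, n_init=10, random_state=0),
    "ward_k4": AgglomerativeClustering(n_clusters=4, linkage="ward"),
    "average_k4": AgglomerativeClustering(n_clusters=4, linkage="average"),
}
for name, model in candidates.items():
    labels = model.fit_predict(X)
    print(name, silhouette_score(X, labels))   # higher silhouette = better-separated clusters
```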

Accuracy and ROC for Logistic and Decision Tree

I ran a logistic regression model and a decision tree model on the same data. Accuracy shows that the decision tree slightly outperforms logistic regression. However, my ROC curve shows that logistic regression is much better than the decision tree. How could this happen?
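A quick way to see how this can happen: accuracy is computed at a single decision threshold, while ROC AUC depends on the full ranking of predicted probabilities, and an unpruned decision tree outputs mostly 0/1 probabilities that give a coarse ROC curve. A minimal sketch on synthetic data (a placeholder, not the questioner's dataset):

```python
# Sketch: accuracy (single threshold) vs. ROC AUC (full probability ranking)
# for a logistic regression and an unpruned decision tree.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("tree", DecisionTreeClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          "accuracy:", accuracy_score(y_te, model.predict(X_te)),
          "auc:", roc_auc_score(y_te, proba))
```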