Are there any documented algorithms to separate sections of a given dataset into different curves of best fit?
For example, most humans looking at this chart of data would readily divide it into three parts: a sinusoidal segment, a linear segment, and an inverse-exponential segment. In fact, I generated this particular example from a sine wave, a line and a simple exponential formula.
Are there existing algorithms for finding segments like that, which can then each be fitted separately to a curve or line, producing a kind of compound series of best fits over subsets of the data?
Note that although the segment ends in the example line up fairly closely, this won’t necessarily be the case; there may also be a sudden jump in the values at a segment boundary. Those cases may actually be easier to detect.
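To make the idea concrete, here is a minimal sketch of the simplest version of the problem: locating a single breakpoint by brute force, fitting an independent least-squares line to each side and keeping the split with the smallest combined squared error. The function name and the two-regime test data are my own illustration, not an established routine; real changepoint methods generalize this to multiple breakpoints and non-linear segment models.

```python
import numpy as np

def find_breakpoint(x, y, min_seg=5):
    """Return the index that splits (x, y) into two segments whose
    independent linear fits have the smallest combined squared error."""
    best_idx, best_sse = None, np.inf
    for i in range(min_seg, len(x) - min_seg):
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coeffs = np.polyfit(xs, ys, 1)           # slope, intercept
            resid = ys - np.polyval(coeffs, xs)
            sse += float(resid @ resid)
        if sse < best_sse:
            best_idx, best_sse = i, sse
    return best_idx

# Two linear regimes with a kink at x = 5
x = np.linspace(0, 10, 200)
y = np.where(x < 5, 2 * x, 10 + 0.5 * (x - 5))
print(find_breakpoint(x, y))  # index near 100, i.e. x ~ 5
```

The same exhaustive-search idea extends to several breakpoints via dynamic programming, which is what dedicated changepoint libraries implement; the quadratic scan above is only workable for one cut on a few thousand points.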
Update: Here is an image of a small bit of real-world data:
Update 2: here is an unusually small real-world set of data (only 509 data points):
Here it is, charted, with the approximate positions of some known real-world element edges marked with dotted lines, a luxury we won’t normally have:
One luxury we do have, however, is hindsight: the data in my case is not a time series but is spatially related, so it only makes sense to analyse a whole dataset (usually 5000–15000 data points) at once, not in an ongoing manner.
My interpretation of the question is that the OP is looking for methodologies that would fit the shape(s) of the examples provided, not the HAC (heteroscedasticity- and autocorrelation-consistent) residuals. In addition, automated routines that don’t require significant human or analyst intervention are desired. Box-Jenkins models may not be appropriate, despite their emphasis in this thread, since they do require substantial analyst involvement.
R packages exist for this type of non-moment-based pattern matching. Permutation distribution clustering is one such technique, developed by a Max Planck Institute scientist, that meets the criteria you’ve outlined. Its application is to time series data, but it’s not limited to that. Here’s a citation for the R package that’s been developed:
pdc: An R Package for Complexity-Based Clustering of Time Series by Andreas Brandmaier
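The core idea behind PDC can be sketched in a few lines: describe each series by the frequencies of its ordinal (order) patterns over sliding windows, then compare series by a distance between those distributions. This is a toy illustration of that idea, not the pdc package’s actual implementation; the function names and the Hellinger distance choice are my own (pdc uses a related divergence between permutation distributions).

```python
from collections import Counter
from itertools import permutations
import math

def perm_distribution(series, m=3):
    """Frequencies of ordinal patterns of embedding dimension m."""
    counts = Counter(
        tuple(sorted(range(m), key=lambda k: series[i + k]))
        for i in range(len(series) - m + 1)
    )
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in permutations(range(m))}

def hellinger(p, q):
    """Hellinger distance between two pattern distributions (0 = identical)."""
    return math.sqrt(
        sum((math.sqrt(p[k]) - math.sqrt(q[k])) ** 2 for k in p)
    ) / math.sqrt(2)

rising = list(range(50))                      # monotone ramp: one pattern only
zigzag = [(-1) ** i * i for i in range(50)]   # alternating series
print(hellinger(perm_distribution(rising), perm_distribution(zigzag)))
```

Because the distribution ignores the actual values and keeps only their local ordering, it is insensitive to scaling and offsets, which is what makes it attractive for clustering series by shape.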
In addition to PDC, there’s the machine-learning iSAX routine developed by Eamonn Keogh at UC Riverside that’s also worth comparing.
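For flavor, here is a minimal sketch of the SAX discretization that iSAX builds on: z-normalize the series, reduce it with piecewise aggregate approximation (PAA), then map each segment mean to a letter using equiprobable breakpoints of the standard normal. The function name is my own; real iSAX adds variable-resolution symbols and an index structure on top of this.

```python
import numpy as np

def sax(series, n_segments=8):
    """Map a series to an n_segments-letter SAX word over a 4-letter alphabet."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                   # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1)   # piecewise aggregate approximation
    cuts = np.array([-0.6745, 0.0, 0.6745])        # equiprobable N(0,1) quartiles
    return "".join("abcd"[np.searchsorted(cuts, v)] for v in paa)

ramp = np.arange(64)
print(sax(ramp))  # "aabbccdd": low letters early, high letters late
```

Two series can then be compared cheaply by comparing their SAX words, which is what makes indexing massive collections feasible.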
Finally, there’s the paper Data Smashing: Uncovering Lurking Order in Data by Chattopadhyay and Lipson. Beyond the clever title, there is a serious purpose at work. Here’s the abstract:
“From automatic speech recognition to discovering unusual stars, underlying almost all automated discovery tasks is the ability to compare and contrast data streams with each other, to identify connections and spot outliers. Despite the prevalence of data, however, automated methods are not keeping pace. A key bottleneck is that most data comparison algorithms today rely on a human expert to specify what ‘features’ of the data are relevant for comparison. Here, we propose a new principle for estimating the similarity between the sources of arbitrary data streams, using neither domain knowledge nor learning. We demonstrate the application of this principle to the analysis of data from a number of real-world challenging problems, including the disambiguation of electro-encephalograph patterns pertaining to epileptic seizures, detection of anomalous cardiac activity from heart sound recordings and classification of astronomical objects from raw photometry. In all these cases and without access to any domain knowledge, we demonstrate performance on a par with the accuracy achieved by specialized algorithms and heuristics devised by domain experts. We suggest that data smashing principles may open the door to understanding increasingly complex observations, especially when experts do not know what to look for.”
This approach goes well beyond curvilinear fitting. It’s worth checking out.