How do I fit a resonance curve?
In an experiment, I collected data points $(\omega, \upsilon(\omega))$ that are modeled by the equation:
$$ \upsilon(\omega) = \frac{\omega C}{\sqrt{(\omega^2 - \omega_0^2)^2 + \gamma^2 \omega^2}} \,. $$
How do I fit the data to this model, and how can I extract $\gamma$ through this process?

experimental-physics correlation-functions data-analysis
– robert bristow-johnson (yesterday): almost looks like the magnitude response of a second-order bandpass filter. other than a least-squares fit (or some other $L_p$ error metric), i dunno how else to get $\gamma$. seems to me that the least-squares fit also needs to find $\omega_0$ and $C$. but i think $C$ can come out in the wash.

– robert bristow-johnson (yesterday): $C$ can be absorbed into $\gamma$.

– tobi_s (yesterday): You said nothing about how you estimate the errors of your measured points, or what your criterion for a good fit would be. In general you would want to maximize some likelihood function; in practice (with Gaussian 1D errors on the data points) a least-squares fit may be good enough. Your case probably involves bins ($\omega$ being a continuous variable) and the Poisson distribution (few entries in bins far away from $\omega_0$), so a more complex log-likelihood approach could be called for.

– Kyle Kanos (yesterday, +3): I strongly disagree that this should be posted on Mathematics, as suggested by the close votes on it. I think such questions are on topic here, but were they not, then Cross Validated would be the most obvious choice.
asked 2 days ago by Andreas Mastronikolis · edited yesterday by knzhou
4 Answers
What you want to find is the parameter set $\theta = (C, \omega_0, \gamma)$ that minimizes the difference between $\nu(\omega|\theta)$ (the curve given the parameters) and the measured $\nu_i$ values.

The most popular method is least-squares fitting, which minimizes the sum of the squares of the differences. One can also formulate the normal equations and solve them as a (potentially big) linear system. Another approach is the Gauss–Newton algorithm, a simple iterative method. It is a good exercise to implement the solution yourself, but once you have done it once or twice it is best to rely on a software package.

Note that this kind of fitting works well when you know the functional form (your equation for $\nu(\omega)$), since you can ensure that only the parameters that matter are included. If you instead fit some general polynomial or function, you risk overfitting (a complex curve that matches all the data but has nothing to do with your problem), on top of the problem of identifying the parameters you care about.

answered 2 days ago by Anders Sandberg
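In practice this is a one-liner with `scipy.optimize.curve_fit`. The sketch below uses synthetic data with made-up "true" parameter values, not the OP's measurements; the initial guess `p0` is an assumption (peak location as a rough $\omega_0$):

```python
import numpy as np
from scipy.optimize import curve_fit

def v_model(w, C, w0, gamma):
    # v(w) = w*C / sqrt((w^2 - w0^2)^2 + gamma^2 * w^2)
    return w * C / np.sqrt((w**2 - w0**2)**2 + gamma**2 * w**2)

# Synthetic "measurements": hypothetical true parameters plus Gaussian noise
rng = np.random.default_rng(0)
C_true, w0_true, g_true = 2.0, 5.0, 0.8
w = np.linspace(1.0, 10.0, 200)
v = v_model(w, C_true, w0_true, g_true) + rng.normal(0.0, 0.005, w.size)

# Initial guesses matter for non-linear fits; the peak location is a decent w0 guess
p0 = [1.0, w[np.argmax(v)], 1.0]
popt, pcov = curve_fit(v_model, w, v, p0=p0)
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
C_fit, w0_fit, g_fit = popt
```

The covariance matrix `pcov` gives the uncertainty on $\gamma$ from the same call. Note that $\gamma$ only enters the model squared, so the fit may return it with either sign.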
Don't try using any general-purpose curve-fitting algorithm for this.

The form of your function looks like a frequency response function, with the two unknown parameters $\omega_0$ and $\gamma$, i.e. the resonant frequency and the damping parameter. The function you specified omits an important feature if this is measured data, namely the relative phase between the "force" driving the oscillation and the response.

If you didn't measure the phase at each frequency, repeat the experiment, because that is critical information.

When you have the amplitude and phase data, there are curve-fitting techniques devised specifically for this problem of "system identification" in experimental modal analysis. A simple one is the so-called "circle fitting" method. If you make a Nyquist plot of your measured data (i.e. plot the imaginary part of the response against the real part), the section of the curve near the resonance is a circle, and you can fit a circle to the measured data and find the parameters from it.

In practice, a simplistic approach that assumes the system has only one resonance often doesn't work well, because the response of a real system near resonance also includes the off-resonance response of all the other vibration modes. If the resonant frequencies are well separated and lightly damped, it is possible to correct for this while fitting "one mode at a time". If this is not the case, you need methods that can identify several resonances simultaneously from one response function.

Rather than re-invent the wheel, use existing code. The signal processing toolbox in MATLAB would be a good starting point - for example https://uk.mathworks.com/help/signal/ref/modalfit.html

answered 2 days ago by alephzero, edited 2 days ago
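To illustrate the circle-fitting idea, here is a minimal sketch (my own construction, not the `modalfit` implementation). For a single viscously damped mode with receptance $H(\omega) = 1/(\omega_0^2 - \omega^2 + i\gamma\omega)$, the Nyquist plot of the mobility $Y(\omega) = i\omega H(\omega)$ is exactly a circle of radius $1/(2\gamma)$, so an algebraic circle fit recovers $\gamma$ directly; the parameter values below are made up:

```python
import numpy as np

def fit_circle(x, y):
    # Algebraic (Kasa) circle fit: x^2 + y^2 = D*x + E*y + F in least squares
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = D / 2, E / 2
    r = np.sqrt(F + cx**2 + cy**2)
    return cx, cy, r

# Hypothetical single-mode response, sampled around the resonance
w0, gamma = 5.0, 0.8
w = np.linspace(4.0, 6.0, 100)
H = 1.0 / (w0**2 - w**2 + 1j * gamma * w)   # receptance
Y = 1j * w * H                               # mobility: exact circle for viscous damping
cx, cy, r = fit_circle(Y.real, Y.imag)
gamma_fit = 1.0 / (2 * r)
```

With noisy multi-mode data the circle only holds near each resonance, so one would fit the points in a band around the peak, which is what dedicated modal-analysis tools automate.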
– Emilio Pisanty (2 days ago, +7): That is, of course, if the phase information is experimentally accessible. It's measurable in plenty of systems, but there are also many cases where it is either inaccessible or much more expensive to access.

– IamAStudent (2 days ago): What is a well-known method for identifying several closely spaced resonances at the same time?

– Federico Poloni (yesterday, +2): What are the advantages of these algorithms with respect to the general-purpose ones? What cost function do they minimize?
If we put
$$Y = \frac{\omega^2}{u(\omega)^2}$$
and
$$X = \omega^2,$$
the equation becomes
$$Y = \frac{X^2}{C^2} + \frac{\gamma^2 - 2\omega_0^2}{C^2}\,X + \frac{\omega_0^4}{C^2}.$$
You can then extract the coefficients using polynomial fitting. To get the least-squares fit right, you have to compute the errors in $Y$ and $X$ for each data point from the measurement errors in $\omega$ and $u(\omega)$.
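As a sketch (with made-up parameter values and noiseless data), the transformed fit is a plain quadratic in $X$, and the original parameters follow from the coefficients: $C = 1/\sqrt{a_2}$, $\omega_0 = (a_0/a_2)^{1/4}$, $\gamma = \sqrt{a_1/a_2 + 2\omega_0^2}$:

```python
import numpy as np

# Hypothetical true parameters; noiseless synthetic data for the sketch
C, w0, gamma = 2.0, 5.0, 0.8
w = np.linspace(1.0, 10.0, 200)
u = w * C / np.sqrt((w**2 - w0**2)**2 + gamma**2 * w**2)

# Linearizing transformation: Y = w^2/u^2 is quadratic in X = w^2
X = w**2
Y = w**2 / u**2

a2, a1, a0 = np.polyfit(X, Y, 2)     # Y = a2*X^2 + a1*X + a0
C_fit  = 1.0 / np.sqrt(a2)
w0_fit = (a0 / a2) ** 0.25
g_fit  = np.sqrt(a1 / a2 + 2 * w0_fit**2)
```

With real noisy data the caveat in the comment below the answer applies: the transformation distorts the error distribution, so the quadratic fit should be weighted by the propagated errors in $Y$.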
– tobi_s (yesterday, +4): This is a beautiful transformation, but it will also distort the error distributions of the data points, rendering the fit much harder as a result. The preferred approach to fitting depends on your measurement errors (or error estimates) and on whether your data is binned. For unbinned data, if the $\omega$ values are exact (or the error is negligible compared to $v(\omega)$), and if the errors on $v(\omega)$ are drawn from a Gaussian distribution, a (non-linear) least-squares fit to your data points is hard to beat, as it will also be a maximum-likelihood fit.
Are you looking for something like polynomial regression? The general idea is: if you have measured pairs $(x, y(x))$ and you want a fit of the form
$$y = \alpha_0 + \alpha_1 x + \alpha_2 x^2 + \dots$$
you can write this in matrix form as
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ 1 & x_3 & x_3^2 & \cdots \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 & \cdots \end{bmatrix} \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{bmatrix}$$
This can now be solved for your coefficients $\alpha_i$. That being said, and as was hinted at in your comments, I've never actually done this myself; I have instead used non-linear fitting functions provided by libraries.

More information on polynomial regression is on the Wikipedia page.

Edit: As you say in the comments, this method is only applicable if you can write the function you wish to fit in polynomial form, which I don't think you can do for your example. In that case you are best off referring to the other answers to this question.
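When there are more points than coefficients, the Vandermonde system is rectangular and is solved in the least-squares sense (via the pseudoinverse), which is what distinguishes regression from interpolation. A small sketch with made-up data:

```python
import numpy as np

# Hypothetical data roughly following y = 1 + 2x + 3x^2, plus small Gaussian noise
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50)
y = 1 + 2 * x + 3 * x**2 + rng.normal(0.0, 0.01, x.size)

# Vandermonde matrix with columns [1, x, x^2]; 50 rows, 3 unknowns
V = np.vander(x, 3, increasing=True)

# Least-squares solution, equivalent to applying the pseudoinverse V^+ to y
alpha, *_ = np.linalg.lstsq(V, y, rcond=None)
```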
– Andreas Mastronikolis (2 days ago, +2): The answer is yes, if the equation can be reduced to a polynomial one. I don't think it can be, though.

– Anon1759 (2 days ago): Then I think your only choice is to follow the advice given in Anders Sandberg's answer and use one of the fitting techniques suggested there.

– Vladimir F (yesterday): The formula with the Vandermonde matrix is for linear interpolation, not for linear regression. Or what am I missing?

– Vladimir F (yesterday): @AndreasMastronikolis You can always connect $n+1$ points with a Lagrange polynomial of degree $n$. But I doubt it makes much sense here.

– Federico Poloni (yesterday): @VladimirF The Vandermonde formula works also for linear regression. You just need to take the pseudoinverse $V^+ = (V^T V)^{-1} V^T$ of that (rectangular) matrix rather than its classical inverse, i.e., solve the system in the least-squares sense.
$begingroup$
What you want to find is the parameters $theta=(C, omega_0, gamma)$ that minimizes the difference between $nu(omega|theta)$ (the curve given the parameters) and the measured $nu_i$ values.
The most popular method is least mean square fitting, which minimizes the sum of the squares of the differences. One can also do it by formulating the normal equations and solve it as a (potentially big) linear equation system. Another approach is the Gauss-Newton algorithm, a simple iterative method to do it. It is a good exercise to implement the solution oneself, but once you have done it once or twice it is best to rely on some software package.
Note that this kind of fitting works well when you know the functional form (your equation for $nu(omega)$), since you can ensure only that the parameters that matter are included. If you try to fit some general polynomial or function you can get overfitting (some complex curve that fits all the data but has nothing to do with your problem) besides the problem of identifying the parameters you care about.
$endgroup$
add a comment |
$begingroup$
What you want to find is the parameters $theta=(C, omega_0, gamma)$ that minimizes the difference between $nu(omega|theta)$ (the curve given the parameters) and the measured $nu_i$ values.
The most popular method is least mean square fitting, which minimizes the sum of the squares of the differences. One can also do it by formulating the normal equations and solve it as a (potentially big) linear equation system. Another approach is the Gauss-Newton algorithm, a simple iterative method to do it. It is a good exercise to implement the solution oneself, but once you have done it once or twice it is best to rely on some software package.
Note that this kind of fitting works well when you know the functional form (your equation for $nu(omega)$), since you can ensure only that the parameters that matter are included. If you try to fit some general polynomial or function you can get overfitting (some complex curve that fits all the data but has nothing to do with your problem) besides the problem of identifying the parameters you care about.
$endgroup$
add a comment |
$begingroup$
What you want to find is the parameters $theta=(C, omega_0, gamma)$ that minimizes the difference between $nu(omega|theta)$ (the curve given the parameters) and the measured $nu_i$ values.
The most popular method is least mean square fitting, which minimizes the sum of the squares of the differences. One can also do it by formulating the normal equations and solve it as a (potentially big) linear equation system. Another approach is the Gauss-Newton algorithm, a simple iterative method to do it. It is a good exercise to implement the solution oneself, but once you have done it once or twice it is best to rely on some software package.
Note that this kind of fitting works well when you know the functional form (your equation for $nu(omega)$), since you can ensure only that the parameters that matter are included. If you try to fit some general polynomial or function you can get overfitting (some complex curve that fits all the data but has nothing to do with your problem) besides the problem of identifying the parameters you care about.
$endgroup$
What you want to find is the parameters $theta=(C, omega_0, gamma)$ that minimizes the difference between $nu(omega|theta)$ (the curve given the parameters) and the measured $nu_i$ values.
The most popular method is least mean square fitting, which minimizes the sum of the squares of the differences. One can also do it by formulating the normal equations and solve it as a (potentially big) linear equation system. Another approach is the Gauss-Newton algorithm, a simple iterative method to do it. It is a good exercise to implement the solution oneself, but once you have done it once or twice it is best to rely on some software package.
Note that this kind of fitting works well when you know the functional form (your equation for $nu(omega)$), since you can ensure only that the parameters that matter are included. If you try to fit some general polynomial or function you can get overfitting (some complex curve that fits all the data but has nothing to do with your problem) besides the problem of identifying the parameters you care about.
answered 2 days ago
Anders SandbergAnders Sandberg
10k21530
10k21530
add a comment |
add a comment |
$begingroup$
Don't try using any general-purpose curve fitting algorithm for this.
The form of your function looks like a frequency response function, with the two unknown parameters $omega_0$ and $gamma$ - i.e. the resonant frequency, and the damping parameter. The function you specified omits an important feature if this is measured data, namely the relative phase between the "force" driving the oscillation and the response.
If you didn't measure the phase at each frequency, repeat the experiment, because that is critical information.
When you have the amplitude and phase data, there are curve fitting techniques devised specifically for this problem of "system identification" in experimental modal analysis. A simple one is the so-called "circle fitting" method. If you make a Nyquist plot of your measured data (i.e. plot imaginary part of the response against the real part), the section of the curve near the resonance is a circle, and you can fit a circle to the measured data and find the parameters from it.
In practice, a simplistic approach assuming the system only has one resonance often doesn't work well, because the response of a real system near resonance also includes the off-resonance response to all the other vibration modes. If the resonant frequencies are well separated and lightly damped, it is possible to correct for this while fitting "one mode at a time". If this is not the case, you need methods that can identify several resonances simultaneously from one response function.
Rather than re-invent the wheel, use existing code. The signal processing toolbox in MATLAB would be a good starting point - for example https://uk.mathworks.com/help/signal/ref/modalfit.html
$endgroup$
7
$begingroup$
That is, of course, if the phase information is experimentally accessible. It's measurable in plenty of systems, but there are also many cases where it is either inaccessible or much more expensive to access.
$endgroup$
– Emilio Pisanty
2 days ago
$begingroup$
what is a well-known method for identifying several closely spaced resonances at the same time?
$endgroup$
– IamAStudent
2 days ago
2
$begingroup$
What are the advantages of these algorithms with respect to the general-purpose ones? What cost function do they minimize?
$endgroup$
– Federico Poloni
yesterday
add a comment |
$begingroup$
Don't try using any general-purpose curve fitting algorithm for this.
The form of your function looks like a frequency response function, with the two unknown parameters $omega_0$ and $gamma$ - i.e. the resonant frequency, and the damping parameter. The function you specified omits an important feature if this is measured data, namely the relative phase between the "force" driving the oscillation and the response.
If you didn't measure the phase at each frequency, repeat the experiment, because that is critical information.
When you have the amplitude and phase data, there are curve fitting techniques devised specifically for this problem of "system identification" in experimental modal analysis. A simple one is the so-called "circle fitting" method. If you make a Nyquist plot of your measured data (i.e. plot imaginary part of the response against the real part), the section of the curve near the resonance is a circle, and you can fit a circle to the measured data and find the parameters from it.
In practice, a simplistic approach assuming the system only has one resonance often doesn't work well, because the response of a real system near resonance also includes the off-resonance response to all the other vibration modes. If the resonant frequencies are well separated and lightly damped, it is possible to correct for this while fitting "one mode at a time". If this is not the case, you need methods that can identify several resonances simultaneously from one response function.
Rather than re-invent the wheel, use existing code. The signal processing toolbox in MATLAB would be a good starting point - for example https://uk.mathworks.com/help/signal/ref/modalfit.html
$endgroup$
7
$begingroup$
That is, of course, if the phase information is experimentally accessible. It's measurable in plenty of systems, but there are also many cases where it is either inaccessible or much more expensive to access.
$endgroup$
– Emilio Pisanty
2 days ago
$begingroup$
what is a well-known method for identifying several closely spaced resonances at the same time?
$endgroup$
– IamAStudent
2 days ago
2
$begingroup$
What are the advantages of these algorithms with respect to the general-purpose ones? What cost function do they minimize?
$endgroup$
– Federico Poloni
yesterday
add a comment |
$begingroup$
Don't try using any general-purpose curve fitting algorithm for this.
The form of your function looks like a frequency response function, with the two unknown parameters $omega_0$ and $gamma$ - i.e. the resonant frequency, and the damping parameter. The function you specified omits an important feature if this is measured data, namely the relative phase between the "force" driving the oscillation and the response.
If you didn't measure the phase at each frequency, repeat the experiment, because that is critical information.
When you have the amplitude and phase data, there are curve fitting techniques devised specifically for this problem of "system identification" in experimental modal analysis. A simple one is the so-called "circle fitting" method. If you make a Nyquist plot of your measured data (i.e. plot imaginary part of the response against the real part), the section of the curve near the resonance is a circle, and you can fit a circle to the measured data and find the parameters from it.
In practice, a simplistic approach assuming the system only has one resonance often doesn't work well, because the response of a real system near resonance also includes the off-resonance response to all the other vibration modes. If the resonant frequencies are well separated and lightly damped, it is possible to correct for this while fitting "one mode at a time". If this is not the case, you need methods that can identify several resonances simultaneously from one response function.
Rather than re-invent the wheel, use existing code. The signal processing toolbox in MATLAB would be a good starting point - for example https://uk.mathworks.com/help/signal/ref/modalfit.html
$endgroup$
Don't try using any general-purpose curve fitting algorithm for this.
The form of your function looks like a frequency response function, with the two unknown parameters $omega_0$ and $gamma$ - i.e. the resonant frequency, and the damping parameter. The function you specified omits an important feature if this is measured data, namely the relative phase between the "force" driving the oscillation and the response.
If you didn't measure the phase at each frequency, repeat the experiment, because that is critical information.
When you have the amplitude and phase data, there are curve fitting techniques devised specifically for this problem of "system identification" in experimental modal analysis. A simple one is the so-called "circle fitting" method. If you make a Nyquist plot of your measured data (i.e. plot imaginary part of the response against the real part), the section of the curve near the resonance is a circle, and you can fit a circle to the measured data and find the parameters from it.
In practice, a simplistic approach assuming the system only has one resonance often doesn't work well, because the response of a real system near resonance also includes the off-resonance response to all the other vibration modes. If the resonant frequencies are well separated and lightly damped, it is possible to correct for this while fitting "one mode at a time". If this is not the case, you need methods that can identify several resonances simultaneously from one response function.
Rather than re-invent the wheel, use existing code. The signal processing toolbox in MATLAB would be a good starting point - for example https://uk.mathworks.com/help/signal/ref/modalfit.html
edited 2 days ago
answered 2 days ago
alephzeroalephzero
5,65621120
5,65621120
7
$begingroup$
That is, of course, if the phase information is experimentally accessible. It's measurable in plenty of systems, but there are also many cases where it is either inaccessible or much more expensive to access.
$endgroup$
– Emilio Pisanty
2 days ago
$begingroup$
what is a well-known method for identifying several closely spaced resonances at the same time?
$endgroup$
– IamAStudent
2 days ago
2
$begingroup$
What are the advantages of these algorithms with respect to the general-purpose ones? What cost function do they minimize?
$endgroup$
– Federico Poloni
yesterday
add a comment |
If we put:
$$Y = \frac{\omega^2}{u(\omega)^2}$$
and
$$X = \omega^2$$
the equation becomes:
$$Y = \frac{X^2}{C^2} + \frac{\gamma^2 - 2\omega_0^2}{C^2}\,X + \frac{\omega_0^4}{C^2}$$
You can then extract the coefficients using polynomial fitting. To get the least-squares fit right, you have to compute the errors in $Y$ and $X$ for each data point from the measurement errors in $\omega$ and $u(\omega)$.
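As an illustration of this transformation, here is a sketch that assumes the underlying resonance model $u(\omega) = C\omega/\sqrt{(\omega_0^2-\omega^2)^2 + \gamma^2\omega^2}$ (the form consistent with the quadratic above), generates noise-free synthetic data, and inverts the fitted coefficients; it uses equal weights, so the error propagation mentioned at the end is deliberately omitted:

```python
import numpy as np

# Assumed resonance model consistent with the transformation above:
#   u(w) = C*w / sqrt((w0^2 - w^2)^2 + gamma^2 * w^2)
C, w0, gamma = 2.0, 10.0, 0.5
w = np.linspace(5.0, 15.0, 300)
u = C * w / np.sqrt((w0**2 - w**2) ** 2 + gamma**2 * w**2)

# Transform: X = w^2, Y = w^2/u^2 turns the model into a quadratic in X.
X = w**2
Y = w**2 / u**2

# Fit Y = p2*X^2 + p1*X + p0, then invert the coefficient relations:
#   p2 = 1/C^2,  p1 = (gamma^2 - 2*w0^2)/C^2,  p0 = w0^4/C^2
p2, p1, p0 = np.polyfit(X, Y, 2)
C_est = 1.0 / np.sqrt(p2)
w0_est = (p0 / p2) ** 0.25
gamma_est = np.sqrt(p1 / p2 + 2.0 * w0_est**2)
print(C_est, w0_est, gamma_est)
```

With real (noisy) data, the distortion of the error distribution discussed in the comment below this answer applies: the transformed $Y$ errors are no longer Gaussian or homoscedastic, so per-point weights should be propagated from the measurement errors.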
This is a beautiful transformation, but it will also distort the error distributions of the data points, making the fit much harder as a result. The preferred approach to fitting depends on your measurement errors (or error estimates) and on whether your data is binned. For unbinned data, if the $\omega$ values are exact (or if their error is negligible compared to that of $v(\omega)$), and if the errors on $v(\omega)$ are drawn from a Gaussian distribution, a (non-linear) least-squares fit to your data points is hard to beat, as it will also be a maximum-likelihood fit.
– tobi_s (yesterday)
answered by Count Iblis
Are you looking for something like polynomial regression? The general idea is: if you have measured pairs $(x, y(x))$ and you are looking to find a fit of the form
$$y = \alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots$$
you can write this in matrix form as:
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots \\ 1 & x_2 & x_2^2 & \cdots \\ 1 & x_3 & x_3^2 & \cdots \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_n & x_n^2 & \cdots \end{bmatrix} \begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_m \end{bmatrix}$$
This can now be solved for the coefficients $\alpha_i$. That being said, and as was hinted at in your comments, I've never actually done this, and have instead used non-linear fitting functions provided by libraries.
More information on polynomial regression on the Wikipedia page.
Edit: As you say in the comments, this method is only applicable if you can write the function you wish to fit in polynomial form, which I don't think you can do for your example. In that case you are best off referring to the other answers to this question.
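A minimal sketch of this matrix equation solved in the least-squares sense (the pseudoinverse route mentioned in the comments), on hypothetical noisy quadratic data; the coefficient values are made up for illustration:

```python
import numpy as np

# Hypothetical measured pairs (x, y(x)): a quadratic with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.01, x.size)

# Build the (rectangular) Vandermonde matrix with columns 1, x, x^2, ...
m = 2  # polynomial degree
V = np.vander(x, m + 1, increasing=True)

# Solve V @ alpha = y in the least-squares sense (equivalent to applying
# the pseudoinverse (V^T V)^{-1} V^T, but numerically better behaved).
alpha, *_ = np.linalg.lstsq(V, y, rcond=None)
print(alpha)  # approximately [1, 2, -3]
```

With more rows than columns this is regression, not interpolation: the system is overdetermined, and `lstsq` minimizes the sum of squared residuals rather than passing through every point.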
The answer is yes if the equation can be reduced to a polynomial one. I don't think it can be, though.
– Andreas Mastronikolis (2 days ago)
Then I think your only choice is to follow the advice given in Anders Sandberg's answer and use one of the fitting techniques suggested there.
– Anon1759 (2 days ago)
The formula with the Vandermonde matrix is for linear interpolation, not for linear regression. Or what am I missing?
– Vladimir F (yesterday)
@AndreasMastronikolis You can always connect $n+1$ points with a Lagrange polynomial of degree $n$. But I doubt it makes much sense here.
– Vladimir F (yesterday)
@VladimirF The Vandermonde formula works also for linear regression. You just need to take the pseudoinverse $V^+ = (V^T V)^{-1} V^T$ of that (rectangular) matrix rather than its classical inverse, i.e., solve the system in the least-squares sense.
– Federico Poloni (yesterday)
answered by Anon1759
Almost looks like the magnitude response of a second-order bandpass filter. Other than a least-squares fit (or some other $L_p$ metric of error), I dunno how else to get $\gamma$. Seems to me that the least-squares fit also needs to find $\omega_0$ and $C$, but I think $C$ can come out in the wash.
– robert bristow-johnson (yesterday)
$C$ can be absorbed into $\gamma$.
– robert bristow-johnson (yesterday)
You said nothing about how you estimate the errors of your measured points, or what your criterion for a good fit would be. In general, you would want to maximize some likelihood function; in practice (with Gaussian 1D errors on the data points) a least-squares fit may be good enough. Your case probably involves bins ($\omega$ being a continuous variable) and the Poisson distribution (few entries in bins far away from $\omega_0$), so a more complex log-likelihood approach could be called for.
– tobi_s (yesterday)
I strongly disagree that this should be posted on Mathematics, as suggested by the close votes on it. I think such questions are on topic here, but were they not, then Cross Validated would be the most obvious choice.
– Kyle Kanos (yesterday)