How to evaluate sum with one million summands?
For a research project I am currently working on, I need to do a very simple and straightforward calculation. Unfortunately, I do not know how to include Mathematica code here, but it is very short anyway:
b[x_] := x^2 - x + 1/2
bp[x_] := b[Mod[x, 1]]
d[n_, q_] := Sum[bp[k/n]*bp[q*k/n], {k, 0, n - 1}]
Now I need to compare two values of d[n, q]; in particular, I need to calculate d[1346269, 514229] and d[1346269, 1137064] to see which one is larger. This works perfectly fine for smaller numbers: for example, I tried d[75025, 28657] and got the correct result in a reasonable amount of time. However, when I tried evaluating d[1346269, 514229], after some time I got the result
(1/9760128332100732436)(4744910246749618660646829 -
4880064166050366218 Hold[$ConditionHold[$ConditionHold[
System`Dump`AutoLoad[Hold[Sum`InfiniteSum],
Hold[Sum`InfiniteSum, Sum`SumInfiniteRationalSeries,
Sum`SumInfiniteRationalExponentialSeries,
Sum`SumInfiniteLogarithmicSeries,
Sum`SumInfiniteBernoulliSeries,
Sum`SumInfiniteFibonacciSeries, Sum`SumInfiniteLucasLSeries,
Sum`SumInfiniteArcTangentSeries,
Sum`SumInfiniteArcCotangentSeries,
Sum`SumInfiniteqArcTangentSeries,
Sum`SumInfiniteqArcCotangentSeries,
Sum`SumInfiniteStirlingNumberSeries,
Sum`SumInfiniteqRationalSeries], "Sum`InfiniteSum`"]]]][(1 -
Ceiling[(1 - Sum`FiniteSumDump`l)/1346269] +
Floor[(1346268 - Sum`FiniteSumDump`l)/1346269]) Mod[(
514229 Sum`FiniteSumDump`l)/1346269, 1], Sum`FiniteSumDump`l,
0, 1346268, True])
Now, I am not too familiar with Mathematica, so I am not sure where exactly the problem is. However, I need the two results d[1346269, 1137064] and d[1346269, 514229] exactly (i.e. not numerically): they are extremely close together, so any rounding could change their order. Is there any way of computing these sums symbolically?
asked May 9 at 19:20 by Analysis801
edited May 9 at 19:34 by Carl Lange
1 Answer
I find that this evaluates much faster. Mathematica is much faster at evaluating a function on a list of one million data points than at evaluating the function one million times on a single point each.
b[x_] := x^2 - x + 1/2
bp[x_] := b[Mod[x, 1]]
d[n_, q_] := Total[bp[Range[0, n - 1]/n] bp[q Range[0, n - 1]/n]]
d[1346269, 1137064]
This takes about 17 seconds to evaluate on my machine and gives me:
$\frac{1459973134402153452576673}{9760128332100732436}$
Evaluating d[1346269, 514229]
gives me:
$\frac{1459973134399471446859617}{9760128332100732436}$
The difference is:
$\frac{670501429264}{2440032083025183109}$
Evaluated to 100 digits with N[difference, 100], I get
2.74792054550653169316510562953899839764312057350982296551104800194089580643554604005134744886010933307817655482572397565420836194 $\cdot 10^{-7}$
answered May 9 at 19:55 by MassDefect
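(Editorial sketch, not part of MassDefect's answer.) Since d returns exact rationals, the comparison the question asks for needs no numerics at all; assuming the vectorized d above has been evaluated:
Sign[d[1346269, 1137064] - d[1346269, 514229]]
(* 1, i.e. d[1346269, 1137064] is the larger of the two, consistent with the positive difference shown above *)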
Your point about how functions act on whole lists during evaluation is going to be quite impactful for me, thank you! Do you have a good example or explanation of how evaluating over a list differs from running the evaluation N separate times? I understand the difference in how much code you'd have to write, but it seemed to me that would only affect the memory taken up, not the speed of evaluation. Is the difference in having to access it N times, reloading the function each time?
– CA Trevillian
May 10 at 3:28
I have tried it your way now and indeed it is much faster. This is really useful, since in fact I need not just a few values of d[n, q]: for fixed n I need to compute Table[d[n, q], {q, 0, Ceiling[n/2]}] and then find the smallest element. Is there a way of speeding up the computation of the entire table as well? I thought of first computing Table[bp[k/n], {k, 0, n - 1}] and then just reading the function values from that table, but since bp is not a very complicated function, I was not sure whether this would be significantly faster.
– Analysis801
May 10 at 6:09
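(Editorial sketch, not from the original thread, illustrating the precomputation Analysis801 suggests.) Because bp has period 1, bp[q k/n] equals the value of bp at (Mod[q k, n])/n, so the bp values on the grid k/n can be computed once and indexed for every q. The name dRow is hypothetical, and for n = 1346269 the table over half a million values of q is still a very large computation:
dRow[n_] := Module[{bpVals = bp[Range[0, n - 1]/n], ks = Range[0, n - 1]},
  (* bpVals[[Mod[q ks, n] + 1]] picks out bp[q k/n] for k = 0, ..., n - 1 *)
  Table[Total[bpVals bpVals[[Mod[q ks, n] + 1]]], {q, 0, Ceiling[n/2]}]]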
@CATrevillian mathematica.stackexchange.com/questions/18393/… provides a little background and links to a good question/answer by Mr. Wizard. I don't understand all the details behind why it's faster, but I have often found that using this feature speeds up code for large amounts of data. Sin[Range[0, 100, 0.001]] is faster than Table[Sin[i], {i, 0, 100, 0.001}], which is faster than For[i = 0, i <= 100, i = i + 0.001, Sin[i]].
– MassDefect
May 10 at 18:03
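(Editorial sketch illustrating the ordering described in the previous comment; the exact timings depend on machine and Mathematica version.)
AbsoluteTiming[Sin[Range[0, 100, 0.001]];]                    (* listable call on a whole array *)
AbsoluteTiming[Table[Sin[i], {i, 0, 100, 0.001}];]            (* one call per element *)
AbsoluteTiming[For[i = 0, i <= 100, i = i + 0.001, Sin[i]];]  (* procedural loop, slowest *)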
@Analysis801 Hmmm... I'm not sure. I think that's probably worthy of its own question, if you haven't already asked one. Each one is a pretty major calculation and will take quite some time, so to do half a million of them would be tough.
– MassDefect
May 10 at 18:10
@MassDefect wow! I unfortunately didn’t even know you could do that! Now I fortunately do. The link is awesome and your examples are perfect in illustrating the differences. I’m excited to do some new timing tests :)
– CA Trevillian
May 11 at 1:49