Possible? Yes. Useful? No. The optimizations I'm going to list here are unlikely to make more than a tiny fraction of a percent difference in the runtime. A good compiler may already do these for you.
Anyway, looking at your inner loop:
for (s = 0, j = a; j < b; j += h) {
    func2 = func(j + h);
    s = s + 0.5*(func1 + func2)*h;
    func1 = func2;
}
At every loop iteration you perform three math operations that can be brought outside: adding j + h, multiplication by 0.5, and multiplication by h. The first you can fix by starting your iterator variable at a + h, and the others by factoring out the multiplications:
for (s = 0, j = a + h; j <= b; j += h) {
    func2 = func(j);
    s += func1 + func2;
    func1 = func2;
}
s *= 0.5 * h;
Though I would point out that, due to floating-point roundoff in the repeated j += h, this version can miss the last iteration of the loop. (This was also an issue in your original implementation, which used j < b.) To get around that, drive the loop with an unsigned int or size_t counter:
size_t n;
for (s = 0, n = 0, j = a + h; n < N; n++, j += h) {
    func2 = func(j);
    s += func1 + func2;
    func1 = func2;
}
s *= 0.5 * h;
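For completeness, here is how that loop fits into a full function. Note that func1 must be seeded with func(a) before the loop starts, which the snippets above leave implicit; the name trap_n and the assumption h = (b - a)/N are mine, not from your code:

```c
#include <stddef.h>

/* Counter-based trapezoidal rule. trap_n is an illustrative name,
   and h = (b - a)/N is assumed, matching the usual setup. */
double trap_n(double (*func)(double), double a, double b, size_t N)
{
    double h = (b - a) / (double)N;
    double s, j, func1, func2;
    size_t n;

    func1 = func(a);                 /* seed with the left endpoint */
    for (s = 0, n = 0, j = a + h; n < N; n++, j += h) {
        func2 = func(j);
        s += func1 + func2;          /* factor of 0.5*h applied once, below */
        func1 = func2;
    }
    s *= 0.5 * h;
    return s;
}

static double sample_square(double x) { return x * x; }  /* example integrand */
```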
As Brian's answer says, your time is better spent optimizing the evaluation of the function func. If the accuracy of this method is sufficient, I doubt you'll find anything faster for the same N. (Though you could run some tests to see if e.g. Runge-Kutta lets you lower N enough that the overall integration takes less time without sacrificing accuracy.)
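As a starting point for such a test: applying classical fourth-order Runge-Kutta to y' = func(x), y(a) = 0 gives the quadrature sketch below. Because the integrand does not depend on y, the two midpoint stages coincide and each step reduces to Simpson's rule on [x, x + h]; rk4_integrate is an illustrative name, not something from your code:

```c
#include <stddef.h>

/* Classical RK4 on y' = func(x), y(a) = 0; y(b) is the integral.
   Since func does not depend on y, stages k2 and k3 are equal, so
   each step is Simpson's rule on [x, x + h]. */
double rk4_integrate(double (*func)(double), double a, double b, size_t N)
{
    double h = (b - a) / (double)N;
    double y = 0.0;
    double x = a;
    size_t n;

    for (n = 0; n < N; n++, x += h) {
        double k1  = func(x);
        double k23 = func(x + 0.5 * h);   /* k2 == k3 for y' = func(x) */
        double k4  = func(x + h);
        y += (h / 6.0) * (k1 + 4.0 * k23 + k4);
    }
    return y;
}

static double sample_cube(double x) { return x * x * x; }  /* example integrand */
```

This rule is exact for cubic polynomials, so you can often cut N substantially compared to the trapezoidal rule for smooth integrands; whether that wins overall depends on the extra two func evaluations per step.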
Finally, some more descriptive names would make the code easier to read:

- trapezoidal_integration instead of trap
- sum or running_total instead of s (and use += instead of s = s +)
- trapezoid_width or dx instead of h (or not, depending on which notation you prefer for the trapezoidal rule)
- previous_value and current_value (or something along those lines) instead of func1 and func2, to reflect the fact that they hold function values, not functions
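Putting those names into practice, the counter-based version might read as follows (num_trapezoids and dx = (b - a)/num_trapezoids are my assumptions, in the same spirit as the suggestions above):

```c
#include <stddef.h>

/* Trapezoidal rule with descriptive names; dx = (b - a)/num_trapezoids. */
double trapezoidal_integration(double (*func)(double), double a, double b,
                               size_t num_trapezoids)
{
    double dx = (b - a) / (double)num_trapezoids;
    double previous_value = func(a);   /* left edge of the first trapezoid */
    double current_value;
    double running_total = 0.0;
    double x = a + dx;
    size_t n;

    for (n = 0; n < num_trapezoids; n++, x += dx) {
        current_value = func(x);
        running_total += previous_value + current_value;
        previous_value = current_value;
    }
    return running_total * 0.5 * dx;
}

static double sample_parabola(double x) { return x * x; }  /* example integrand */
```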