Difference between revisions of "微分法"

Kurihaya moved page 微分法 to 微分, per WP:RM#10月下旬(21日から月末) and ノート:微分法#分割提案
 
en:Differential calculus 21:41, 28 October 2015‎ — only the lead section translated so far
[[File:Tangent to a curve.svg|thumb|200px|The graph of a function, drawn in black, and a tangent line to that function, drawn in red. The slope of the tangent line equals the derivative of the function at the marked point.]]
<!--{{Calculus}}-->
In [[数学|mathematics]], '''differential calculus''' (微分法, ''bibunhō''; also written 微分学) is a branch of [[微分積分学|calculus]] that studies how quantities change. Together with [[積分法|integral calculus]], it is one of the two traditional divisions that historically make up calculus.
 
The primary objects of study in differential calculus are the [[微分|derivative]] of a function (also called the differential quotient or differential coefficient), related notions such as [[函数の微分|infinitesimals]], and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is also called differentiation. Geometrically, the derivative at a point on a graph, provided it exists and is defined there, is the [[傾き|slope]] of the [[接線|tangent line]] to the [[函数のグラフ|graph of the function]] at that point. For a {{仮リンク|実数値函数|en|real-valued function}} of a single real variable, the derivative at a point generally determines the best {{仮リンク|線型近似|en|linear approximation}} to the function at that point.
 
Differential and integral calculus are connected by the [[微分積分学の基本定理|fundamental theorem of calculus]], which states that [[積分|integration]] is the process that reverses differentiation.
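The theorem just stated can be illustrated numerically. The sketch below is not part of the article's sources; the function {{math|''f''(''x'') {{=}} ''x''<sup>3</sup>}} and the midpoint quadrature rule are arbitrary illustrative choices:

```python
# Numerical illustration of the fundamental theorem of calculus:
# integrating the derivative of f over [a, b] recovers f(b) - f(a).
def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2  # derivative of x^3

def integrate(g, a, b, n=100000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 1.0, 2.0
print(integrate(f_prime, a, b))  # ≈ f(b) - f(a) = 8 - 1 = 7
```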
 
Differentiation has applications in nearly every discipline that deals with quantities. In [[物理学|physics]], for example, the derivative of the [[変位|displacement]] of a moving body with respect to [[時間|time]] is the body's [[速度|velocity]], and the derivative of velocity with respect to time is its [[加速度|acceleration]]. The derivative of the [[運動量|momentum]] of a body equals the force applied to it; rearranging this statement about derivatives yields the famous equation {{math|'''F''' {{=}} ''m'''''a'''}} associated with [[運動の第2法則|Newton's second law of motion]]. The [[反応速度|reaction rate]] of a [[化学反応|chemical reaction]] is also a derivative. In [[オペレーションズ・リサーチ|operations research]], derivatives are used to determine the most efficient ways to transport materials and design factories.
 
Derivatives are frequently used to find the [[最大値・最小値|maxima and minima]] of a function. Equations involving derivatives are called [[微分方程式|differential equations]] and are fundamental to the description of [[自然現象|natural phenomena]]. Derivatives and their generalizations appear throughout mathematics, in fields such as [[複素解析|complex analysis]], [[函数解析学|functional analysis]], [[微分幾何学|differential geometry]], [[測度論|measure theory]], and {{仮リンク|抽象代数学|en|abstract algebra}}.
 
== The derivative ==
[[Image:Tangent-calculus.svg|thumb|330px|The [[tangent line]] at {{math|(''x'',''f''(''x''))}}]]
{{Main|微分}}
<!--
Suppose that {{math|''x''}} and {{math|''y''}} are [[real number]]s and that {{math|''y''}} is a [[Function (mathematics)|function]] of {{math|''x''}}, that is, for every value of {{math|''x''}}, there is a corresponding value of {{math|''y''}}. This relationship can be written as {{math|1=''y'' = ''f''(''x'')}}. If {{math|''f''(''x'')}} is the equation for a straight line (called a [[linear equation]]), then there are two real numbers {{math|''m''}} and {{math|''b''}} such that {{math|1=''y'' = ''mx'' + ''b''}}. In this "slope-intercept form", the term {{math|''m''}} is called the [[slope]] and can be determined from the formula:
 
:<math> m = \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x},</math>
where the symbol {{math|Δ}} (the uppercase form of the [[Greek alphabet|Greek]] letter [[Delta (letter)|Delta]]) is an abbreviation for "change in". It follows that {{math|1=Δ''y'' = ''m'' Δ''x''}}.
 
A general function is not a line, so it does not have a slope. Geometrically, the '''derivative of {{math|''f''}} at the point {{math|1=''x'' = ''a''}}''' is the slope of the [[tangent|tangent line]] to the function {{math|''f''}} at the point {{math|''a''}} (see figure). This is often denoted {{math|''f'' ′(''a'')}} in [[Notation for differentiation#Lagrange's notation|Lagrange's notation]] or {{math|1={{sfrac|''dy''|''dx''}}{{Pipe}}<sub>''x'' = ''a''</sub>}} in [[Leibniz's notation]]. Since the derivative is the slope of the linear approximation to {{math|''f''}} at the point {{math|''a''}}, the derivative (together with the value of {{math|''f''}} at {{math|''a''}}) determines the best linear approximation, or [[linearization]], of {{math|''f''}} near the point {{math|''a''}}.
 
If every point {{math|''a''}} in the domain of {{math|''f''}} has a derivative, there is a function that sends every point {{math|''a''}} to the derivative of {{math|''f''}} at {{math|''a''}}. For example, if {{math|1=''f''(''x'') = ''x''<sup>2</sup>}}, then the derivative function {{math|1=''f'' ′(''x'') = {{sfrac|''dy''|''dx''}} = 2''x''}}.
 
A closely related notion is the [[differential (calculus)|differential]] of a function. When {{math|''x''}} and {{math|''y''}} are real variables, the derivative of {{math|''f''}} at {{math|''x''}} is the slope of the [[tangent line]] to the graph of {{math|''f''}} at {{math|''x''}}. Because the source and target of {{math|''f''}} are one-dimensional, the derivative of {{math|''f''}} is a real number. If {{math|''x''}} and {{math|''y''}} are vectors, then the best linear approximation to the graph of {{math|''f''}} depends on how {{math|''f''}} changes in several directions at once. Taking the best linear approximation in a single direction determines a [[partial derivative]], which is usually denoted {{math|{{sfrac|∂''y''|∂''x''}}}}. The linearization of {{math|''f''}} in all directions at once is called the [[total derivative]].
-->
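The commented-out example above (if {{math|''f''(''x'') {{=}} ''x''<sup>2</sup>}}, then {{math|''f'' ′(''x'') {{=}} 2''x''}}) can be checked numerically with a symmetric difference quotient. A minimal sketch, not part of the article itself; the step size {{math|''h''}} is an arbitrary small choice:

```python
# Approximate the derivative of f at x using the symmetric
# difference quotient (Δy / Δx with a small Δx).
def derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2
print(derivative(f, 3.0))  # ≈ 6.0, matching f'(x) = 2x at x = 3
```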
== History of differential calculus ==
<!--{{Main|History of calculus}}
 
The concept of a derivative in the sense of a [[tangent line]] is a very old one, familiar to [[Ancient Greece|Greek]] geometers such as
[[Euclid]] (c. 300 BC), [[Archimedes]] (c. 287–212 BC) and [[Apollonius of Perga]] (c. 262–190 BC).<ref>See [[Euclid's Elements]], The [[Archimedes Palimpsest]] and {{MacTutor Biography|id=Apollonius|title=Apollonius of Perga}}</ref> [[Archimedes]] also introduced the use of [[infinitesimal]]s, although these were primarily used to study areas and volumes rather than derivatives and tangents; see [[Archimedes' use of infinitesimals]].
 
The use of infinitesimals to study rates of change can be found in [[Indian mathematics]], perhaps as early as 500 AD, when the astronomer and mathematician [[Aryabhata]] (476–550) used infinitesimals to study the [[Orbit of the Moon|motion of the moon]].<ref>{{MacTutor Biography|id=Aryabhata_I|title=Aryabhata the Elder}}</ref> The use of infinitesimals to compute rates of change was developed significantly by [[Bhāskara II]] (1114–1185); indeed, it has been argued<ref>Ian G. Pearce. [http://turnbull.mcs.st-and.ac.uk/~history/Projects/Pearce/Chapters/Ch8_5.html Bhaskaracharya II.]</ref> that many of the key notions of differential calculus can be found in his work, such as "[[Rolle's theorem]]".<ref>{{Cite journal|first=T. A. A.|last=Broadbent|title=Reviewed work(s): ''The History of Ancient Indian Mathematics'' by C. N. Srinivasiengar|journal=The Mathematical Gazette|volume=52|issue=381|date=October 1968|pages=307–8|doi=10.2307/3614212|jstor=3614212|last2=Kline|first2=M.}}</ref> The [[Islamic mathematics|Persian mathematician]], [[Sharaf al-Dīn al-Tūsī]] (1135–1213), was the first to discover the [[derivative]] of [[Cubic function|cubic polynomials]], an important result in differential calculus;<ref>J. L. Berggren (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat", ''Journal of the American Oriental Society'' '''110''' (2), p. 304-309.</ref> his ''Treatise on Equations'' developed concepts related to differential calculus, such as the derivative [[Function (mathematics)|function]] and the [[maxima and minima]] of curves, in order to solve [[cubic equation]]s which may not have positive solutions.<ref name=Sharaf>{{MacTutor|id=Al-Tusi_Sharaf|title=Sharaf al-Din al-Muzaffar al-Tusi}}</ref>
 
The modern development of calculus is usually credited to [[Isaac Newton]] (1643–1727) and [[Gottfried Leibniz]] (1646–1716), who provided independent<ref>Newton began his work in 1666 and Leibniz began his in 1676. However, Leibniz published his first paper in 1684, predating Newton's publication in 1693. It is possible that Leibniz saw drafts of Newton's work in 1673 or 1676, or that Newton made use of Leibniz's work to refine his own. Both Newton and Leibniz claimed that the other plagiarized their respective works. This resulted in a bitter [[Newton v. Leibniz calculus controversy|controversy]] between the two men over who first invented calculus which shook the mathematical community in the early 18th century.</ref> and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the [[fundamental theorem of calculus]] relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes,<ref>This was a monumental achievement, even though a restricted version had been proven previously by [[James Gregory (astronomer and mathematician)|James Gregory]] (1638–1675), and some key examples can be found in the work of [[Pierre de Fermat]] (1601–1665).</ref> which had not been significantly extended since the time of [[Ibn al-Haytham]] (Alhazen).<ref name=Katz>Victor J. Katz (1995), "Ideas of Calculus in Islam and India", ''Mathematics Magazine'' '''68''' (3): 163-174 [165-9 & 173-4]</ref> For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as [[Isaac Barrow]] (1630–1677), [[René Descartes]] (1596–1650), [[Christiaan Huygens]] (1629–1695), [[Blaise Pascal]] (1623–1662) and [[John Wallis]] (1616–1703). Isaac Barrow is generally given credit for the early development of the derivative.<ref>Eves, H. 
(1990).</ref> Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to [[theoretical physics]], while Leibniz systematically developed much of the notation still used today.
 
Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as [[Augustin Louis Cauchy]] (1789–1857), [[Bernhard Riemann]] (1826–1866), and [[Karl Weierstrass]] (1815–1897). It was also during this period that differentiation was generalized to [[Euclidean space]] and the [[complex plane]].
-->
== Applications ==
<!--
=== Differential equations ===
{{Main|Differential equation}}
 
A differential equation is a relation between a collection of functions and their derivatives. An '''[[ordinary differential equation]]''' is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A '''[[partial differential equation]]''' is a differential equation that relates functions of more than one variable to their [[partial derivative]]s. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, [[Newton's second law]], which describes the relationship between acceleration and force, can be stated as the ordinary differential equation
:<math>F(t) = m\frac{d^2x}{dt^2}.</math>
The [[heat equation]] in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation
:<math>\frac{\partial u}{\partial t} = \alpha\frac{\partial^2 u}{\partial x^2}.</math>
Here {{math|''u''(''x'',''t'')}} is the temperature of the rod at position {{math|''x''}} and time {{math|''t''}} and {{math|''α''}} is a constant that depends on how fast heat diffuses through the rod.
 
=== Mean value theorem ===
{{Main|Mean value theorem}}
 
The mean value theorem gives a relationship between values of the derivative and values of the original function. If {{math|''f''(''x'')}} is a real-valued function and {{math|''a''}} and {{math|''b''}} are numbers with {{math|''a'' < ''b''}}, then the mean value theorem says that under mild hypotheses, the slope between the two points {{math|(''a'', ''f''(''a''))}} and {{math|(''b'', ''f''(''b''))}} is equal to the slope of the tangent line to {{math|''f''}} at some point {{math|''c''}} between {{math|''a''}} and {{math|''b''}}. In other words,
:<math>f'(c) = \frac{f(b) - f(a)}{b - a}.</math>
In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that {{math|''f''}} has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of {{math|''f''}} must equal the slope of one of the tangent lines of {{math|''f''}}. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
 
=== Taylor expansion ===
{{Main|Taylor polynomial|Taylor series}}
 
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function {{math|''f''(''x'')}} at the point {{math|''x''<sub>0</sub>}} is a linear [[polynomial]] {{math|''a'' + ''b''(''x'' − ''x''<sub>0</sub>)}}, and it may be possible to get a better approximation by considering a quadratic polynomial {{math|''a'' + ''b''(''x'' − ''x''<sub>0</sub>) + ''c''(''x'' − ''x''<sub>0</sub>)<sup>2</sup>}}. Still better might be a cubic polynomial {{math|''a'' + ''b''(''x'' − ''x''<sub>0</sub>) + ''c''(''x'' − ''x''<sub>0</sub>)<sup>2</sup> + ''d''(''x'' − ''x''<sub>0</sub>)<sup>3</sup>}}, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients {{math|''a''}}, {{math|''b''}}, {{math|''c''}}, and {{math|''d''}} that makes the approximation as good as possible.
 
In the [[Neighbourhood (mathematics)|neighbourhood]] of {{math|''x''<sub>0</sub>}}, for {{math|''a''}} the best possible choice is always {{math|''f''(''x''<sub>0</sub>)}}, and for {{math|''b''}} the best possible choice is always {{math|''f<nowiki>'</nowiki>''(''x''<sub>0</sub>)}}. For {{math|''c''}}, {{math|''d''}}, and higher-degree coefficients, these coefficients are determined by higher derivatives of {{math|''f''}}. {{math|''c''}} should always be {{math|{{sfrac|''f<nowiki>''</nowiki>''(''x''<sub>0</sub>)|2}}}}, and {{math|''d''}} should always be {{math|{{sfrac|''f<nowiki>'''</nowiki>''(''x''<sub>0</sub>)|3!}}}}. Using these coefficients gives the '''Taylor polynomial''' of {{math|''f''}}. The Taylor polynomial of degree {{math|''d''}} is the polynomial of degree {{math|''d''}} which best approximates {{math|''f''}}, and its coefficients can be found by a generalization of the above formulas. [[Taylor's theorem]] gives a precise bound on how good the approximation is. If {{math|''f''}} is a polynomial of degree less than or equal to {{math|''d''}}, then the Taylor polynomial of degree {{math|''d''}} equals {{math|''f''}}.
 
The limit of the Taylor polynomials is an infinite series called the '''Taylor series'''. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called [[analytic function]]s. It is impossible for functions with discontinuities or sharp corners to be analytic, but there are [[smooth function]]s which are not analytic.
 
=== Implicit function theorem ===
{{Main|Implicit function theorem}}
 
Some natural geometric shapes, such as [[circle]]s, cannot be drawn as the [[graph of a function]]. For instance, if {{math|''f''(''x'', ''y'') {{=}} ''x''<sup>2</sup> + ''y''<sup>2</sup> − 1}}, then the circle is the set of all pairs {{math|(''x'', ''y'')}} such that {{math|''f''(''x'', ''y'') {{=}} 0}}. This set is called the zero set of {{math|''f''}}. It is not the same as the graph of {{math|''f''}}, which is a [[cone (geometry)|cone]]. The implicit function theorem converts relations such as {{math|''f''(''x'', ''y'') {{=}} 0}} into functions. It states that if {{math|''f''}} is [[continuously differentiable]], then around most points, the zero set of {{math|''f''}} looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of {{math|''f''}}. The circle, for instance, can be pasted together from the graphs of the two functions {{math|± {{sqrt|1 - ''x''<sup>2</sup>}}}}. In a neighborhood of every point on the circle except {{nobreak|(−1, 0)}} and {{nobreak|(1, 0)}}, one of these two functions has a graph that looks like the circle. (These two functions also happen to meet at {{nobreak|(−1, 0)}} and {{nobreak|(1, 0)}}, but this is not guaranteed by the implicit function theorem.)
 
The implicit function theorem is closely related to the [[inverse function theorem]], which states when a function looks like graphs of [[invertible function]]s pasted together.
-->
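The one-dimensional heat equation given in the commented-out section above can be simulated with an explicit finite-difference scheme. A sketch only; the diffusivity {{math|''α''}}, rod length, grid size, and initial temperature are illustrative assumptions, not values from the article:

```python
# Explicit finite-difference simulation of u_t = alpha * u_xx on a rod
# whose ends are held at temperature 0. All constants are illustrative.
alpha = 1.0                  # diffusivity (assumed)
n, dx = 50, 1.0 / 50         # 50 intervals on a unit-length rod
dt = 0.4 * dx * dx / alpha   # time step within the stability bound dt <= dx^2 / (2*alpha)

u = [1.0] * (n + 1)          # initial temperature: uniformly hot interior
u[0] = u[n] = 0.0            # cold ends

for _ in range(1000):
    # New interior value = old value + r * (discrete second difference).
    u = [0.0] + [
        u[i] + alpha * dt / (dx * dx) * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, n)
    ] + [0.0]

# Heat flows out through the ends, so the rod cools toward 0 everywhere.
print(max(u))
```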
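The mean value theorem discussed in the commented-out section above can likewise be checked numerically: for a differentiable function there is a point {{math|''c''}} in {{math|(''a'', ''b'')}} where the tangent slope equals the secant slope. A sketch with the arbitrary choice {{math|''f''(''x'') {{=}} ''x''<sup>3</sup>}} on {{math|[0, 2]}}:

```python
# For f(x) = x^3 on [a, b], locate a point c where the tangent slope
# f'(c) equals the secant slope (f(b) - f(a)) / (b - a).
def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
secant = (f(b) - f(a)) / (b - a)  # = 4.0

# f'(c) = 3c^2 = 4 has a solution c = sqrt(4/3) in (a, b); find the
# grid point where |f'(x) - secant| is smallest.
steps = 10 ** 5
c = min((a + i * (b - a) / steps for i in range(steps + 1)),
        key=lambda x: abs(f_prime(x) - secant))
print(c)  # close to sqrt(4/3) ≈ 1.1547
```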
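Finally, the Taylor-polynomial construction described in the commented-out section above (coefficients {{math|''f''(''x''<sub>0</sub>)}}, {{math|''f'' ′(''x''<sub>0</sub>)}}, {{math|''f'' ″(''x''<sub>0</sub>)/2}}, {{math|''f'' ‴(''x''<sub>0</sub>)/3!}}, …) can be sketched for the exponential function at {{math|''x''<sub>0</sub> {{=}} 0}}, where every derivative equals 1, so the {{math|''k''}}-th coefficient is {{math|1/''k''!}}. The function choice is illustrative:

```python
import math

# Degree-d Taylor polynomial of exp at x0 = 0: every derivative of exp
# is exp itself, so the k-th coefficient is exp(0) / k! = 1 / k!.
def taylor_exp(x, d):
    return sum(x ** k / math.factorial(k) for k in range(d + 1))

# Higher-degree polynomials approximate exp(1) = e more and more closely.
for d in (1, 3, 6):
    print(d, taylor_exp(1.0, d))  # values approach 2.71828...
```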
== References ==
{{Reflist}}
 
*{{cite book | author=J. Edwards | title=Differential Calculus | publisher=MacMillan and Co. | location=London | year=1892 | url=http://books.google.com/books?id=unltAAAAMAAJ&pg=PA1#v=onepage&q&f=false}}
 
{{Authority control}}
{{DEFAULTSORT:ひふんほう}}
[[Category:微分法|*]]
[[Category:数学に関する記事]]