Newton-Raphson Iteration in R (for Bivariate Differentiable Functions)

Newton-Raphson Method

The Newton-Raphson method is an iterative method that starts from an initial guess at the root. It works with the original function together with its derivative. When it succeeds it usually converges quickly, but, like other root-finding methods, it can also fail, which is worth keeping in mind: Newton's method does not always converge, and its convergence theory is only local, guaranteeing convergence just for starting points sufficiently close to the root.

Algorithm

The method solves f(x) = 0 iteratively:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
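
For the bivariate case this article targets, the scalar update generalizes in the standard way (stated here for context; the deriv()-based code below implements exactly this step) to

$$\mathbf{x}_{n+1} = \mathbf{x}_n - H_f(\mathbf{x}_n)^{-1}\,\nabla f(\mathbf{x}_n)$$

where ∇f is the gradient and H_f the Hessian of f. This is Newton's method applied to the roots of the gradient, so it locates a stationary point of f.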

Geometric interpretation: [figure]

R code

You can use deriv():
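
As a quick illustration of what deriv() produces (a minimal sketch; the expression x1^2 + x2^2 and the point (1, 2) are arbitrary examples), evaluating the returned expression yields the function value with "gradient" and "hessian" attributes:

dy <- deriv(expression(x1^2 + x2^2), c("x1", "x2"), hessian = TRUE)
x1 <- 1; x2 <- 2
d <- eval(dy)
attributes(d)$gradient  #1 x 2 matrix: (2*x1, 2*x2) = (2, 4)
attributes(d)$hessian   #1 x 2 x 2 array of second derivatives

The iteration routine below builds on these attributes: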

#newton.raphson(): x1 and x2 are the initial values, fun is an expression
#for f(x1, x2), precision is the convergence tolerance on the gradient,
#max.loop caps the number of iterations
newton.raphson <- function(x1, x2, fun, precision, max.loop)
{
  #Compute derivatives of simple expressions, symbolically and algorithmically
  dy <- deriv(fun, c("x1", "x2"), hessian = TRUE)
  
  #Vectors to store the x values from each iteration
  estimate_x1 <- x1
  estimate_x2 <- x2
  
  #Evaluate the derivative expression at the starting point
  d <- eval(dy)
  i <- 1
  #Loop until the gradient is within the specified precision
  while (max(abs(attributes(d)$gradient)) > precision && i < max.loop)
  {
    i <- i + 1
    #Newton step: solve H %*% step = gradient, then move against the step
    hessian <- matrix(attributes(d)$hessian, nrow = 2, ncol = 2)
    step <- solve(hessian, as.vector(attributes(d)$gradient))
    x1 <- x1 - step[1]
    x2 <- x2 - step[2]
    estimate_x1[i] <- x1
    estimate_x2[i] <- x2
    #Re-evaluate value, gradient and Hessian at the new point
    d <- eval(dy)
  }
  #Return the iterates; the last pair is the best estimate of the
  #stationary point of f(x1, x2), i.e. the zero of its gradient
  return(list(Iteration_x1 = estimate_x1, Iteration_x2 = estimate_x2))
}
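
A quick usage sketch (the quadratic target and the starting point are illustrative, not from the original text): f(x1, x2) = (x1 - 1)^2 + 2*(x2 + 3)^2 has its gradient vanish at (1, -3), and a quadratic is solved in a single Newton step:

f <- expression((x1 - 1)^2 + 2 * (x2 + 3)^2)
res <- newton.raphson(x1 = 5, x2 = 5, fun = f, precision = 1e-8, max.loop = 100)
tail(res$Iteration_x1, 1)  #1
tail(res$Iteration_x2, 1)  #-3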

You can also use D():
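
D() performs symbolic differentiation of an expression with respect to a named variable, which the function below uses to obtain f'(x) when no derivative is supplied; for instance:

D(expression(x^2 - 2), 'x')
#2 * x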

newton.raphson <- function(f, init.value, df = NULL, tol = 1e-5, maxiter = 1000) {
  #If no derivative is supplied, differentiate f symbolically with respect to x
  if (is.null(df)) {
    df <- D(f, 'x')
  }
  niter <- 0
  diff <- tol + 1
  x <- init.value
  while (diff >= tol && niter <= maxiter) {
    niter <- niter + 1
    fx <- eval(f)
    dfx <- eval(df)
    if (dfx == 0) {
      warning("Slope is zero: no further improvement possible.")
      break
    }
    #Newton step: x <- x - f(x)/f'(x)
    diff <- -fx / dfx
    x <- x + diff
    diff <- abs(diff)
  }
  if (niter > maxiter) {
    warning("Maximum number of iterations reached without convergence.")
  }
  #Return the estimated root and the number of iterations used
  list(root = x, iter = niter)
}
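
For example, finding the positive root of x^2 - 2, i.e. sqrt(2) (the expression and starting value are illustrative):

res <- newton.raphson(expression(x^2 - 2), init.value = 1)
res$root  #approximately 1.414214
res$iter  #converges in a handful of iterations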

Note that the choice of starting value matters a great deal: a bad starting value can make the iteration fail, so try to pick an initial value close to where you expect the root to be.
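
As an illustration of this sensitivity (a classic divergence example, not from the original text), Newton's method applied to f(x) = atan(x) fails when started too far from the root at 0:

#Each step roughly squares the magnitude of x, so the iterates explode
#and the run never converges; init.value = 0.5 converges to 0 instead
res <- newton.raphson(expression(atan(x)), init.value = 2)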