BCS-054 (Computer Oriented Numerical Techniques)

     BCS-054 Computer Oriented Numerical Techniques (Assignment Solutions)

    Course Code                        :            BCS-054

    Course Title                         :            Computer Oriented Numerical Techniques 

    Assignment Number            :           BCA(V)054/Assignment/2024-25 

    Maximum Marks                 :             100

    Weightage                            :            25% 

    Last Date of Submission      :           31st October, 2024 (for the July session) 

                                                               30th April, 2025 (for the January session)


(a) Explain each of the following concepts, along with at least one suitable example for each: (i) Fixed-point number representation (ii) round-off error (iii) representation of zero as floating point number (iv) significant digits in a decimal number representation (v) normalized representation of a floating point number (vi) overflow 

Ans.

(i) Fixed-point number representation

Fixed-point representation is a way to store numbers where the decimal (or binary) point is fixed at a specific position. In this system, numbers are represented with a fixed number of digits before and after the decimal point, which makes it suitable for representing integers or fractions where the precision is predetermined.

  • Example: Consider a 4-digit fixed-point decimal system with 2 digits reserved for fractional values. In this case, the number 123.45 would be stored as 12345, and the system would interpret it as 123.45. Similarly, the number 5.67 would be stored as 567, interpreted as 5.67.

(ii) Round-off error

Round-off error occurs when a number cannot be represented exactly in a given number format, so it is approximated. This is common in floating-point arithmetic, where some numbers (e.g., irrational numbers or certain decimals) cannot be represented with full precision.

  • Example: When representing the decimal number 0.1 in binary floating-point, it becomes a repeating binary fraction (0.0001100110011...). Since the computer cannot store this infinite sequence, it approximates it to a fixed number of bits, resulting in a round-off error. If you perform calculations based on this approximation, the results may not be exactly correct.

(iii) Representation of zero as floating point number

In floating-point representation (e.g., IEEE 754), zero can be represented as both positive zero (+0) and negative zero (-0). These two values are numerically equal but differ in their sign bit, and they can behave differently in certain computations (e.g., in division or underflow scenarios).

  • Example: In IEEE 754 single-precision format, positive zero is represented by a sign bit of 0, with both exponent and fraction set to 0. Negative zero is represented with the sign bit as 1, and exponent and fraction as 0.

    • Positive zero: +0 → 0 00000000 00000000000000000000000
    • Negative zero: -0 → 1 00000000 00000000000000000000000

(iv) Significant digits in a decimal number representation

Significant digits are the meaningful digits in a number, starting from the first non-zero digit. They convey the precision of the number, which is important in scientific and computational contexts.

  • Example: In the number 0.00567, the significant digits are 5, 6, and 7. Therefore, this number has 3 significant digits. Similarly, the number 123.400 has 6 significant digits because the trailing zeros after the decimal are considered significant.

(v) Normalized representation of a floating point number

In floating-point arithmetic, a normalized number is one in which the leading digit of the significand (mantissa) is non-zero. This ensures that the representation of numbers is unique and maximizes precision for a given number of bits.

  • Example: In a decimal floating-point system, the number 1230 is stored in normalized form as 1.23 × 10³, i.e. with mantissa 1.23 and exponent 3; the non-zero leading digit makes the representation unique and preserves the maximum number of significant digits. In binary, a normalized number has a significand that starts with 1, e.g., 1.101 × 2⁵.

(vi) Overflow

Overflow occurs when a calculation exceeds the maximum limit that can be stored in a given data type or number format. This typically results in incorrect results or system errors, as the value wraps around or saturates.

  • Example: In an 8-bit signed integer representation, the range of representable numbers is from −128 to 127. If you try to add two numbers such as 120 + 10 = 130, an overflow occurs, since 130 is outside the allowable range. The result may "wrap around" and produce an incorrect negative number, depending on the system.

    For example, in two's complement arithmetic 120 + 10 wraps around to −126, because 130 − 256 = −126.
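    The wrap-around can be imitated in software. Below is a minimal Python sketch (the helper to_int8 is hypothetical, written only for illustration) that reinterprets an integer modulo 2⁸ the way a two's complement machine would:

```python
def to_int8(value: int) -> int:
    """Reinterpret an integer as an 8-bit two's complement value (-128..127)."""
    return ((value + 128) % 256) - 128

a, b = 120, 10
print(to_int8(a + b))   # -126: the true sum 130 wraps past the maximum of 127
```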

 Q2 - Explain with a suitable example that in computer arithmetic (i.e., with numbers as represented in a computer and +, −, *, / as implemented in a computer), the multiplication operation (*) may not be distributive over addition, i.e., a × (b + c) = (a × b) + (a × c) may not hold for some computer numbers a, b and c.
Ans. In computer arithmetic, especially when using floating-point numbers, the distributive property of multiplication over addition may not hold due to precision limitations and round-off errors. The distributive property states that:
a × (b + c) = (a × b) + (a × c)

However, because floating-point numbers have limited precision, the results of arithmetic operations can be subject to small errors. These errors can cause the distributive property to fail.

Explanation with Example

Consider the following numbers, and suppose the machine keeps only 10 significant decimal digits:

  • a = 3
  • b = 1 × 10¹⁰ (a very large number)
  • c = −1 × 10¹⁰ + 1 = −9999999999 (almost the negative of b; it just fits in 10 significant digits)

Now, let's test the distributive property with these numbers:

  1. Left side of the distributive property:

    a × (b + c) = 3 × (1 × 10¹⁰ + (−1 × 10¹⁰ + 1)) = 3 × 1 = 3

    This is exact, because b + c = 1 is computed without any rounding.

  2. Right side of the distributive property:

    a × b = 3 × 10¹⁰, which is representable exactly.
    a × c = 3 × (−9999999999) = −29999999997, which needs 11 significant digits and is therefore rounded to −3.000000000 × 10¹⁰.

    Adding these two results:

    3 × 10¹⁰ + (−3 × 10¹⁰) = 0

    Notice that the +1 inside c is too small to survive the rounding of the product a × c. As a result, the right-hand side evaluates to 0, not 3, and a × (b + c) ≠ (a × b) + (a × c).
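The same failure can be observed directly in IEEE 754 double precision. The sketch below uses a different triple of values, since doubles carry roughly 16 significant decimal digits and the effect needs values whose rounding differs between the two groupings:

```python
# Non-distributivity of * over + in IEEE 754 double precision.
a, b, c = 100.0, 0.1, 0.2

lhs = a * (b + c)        # 0.1 + 0.2 already rounds to 0.30000000000000004
rhs = a * b + a * c      # each product rounds cleanly to 10.0 and 20.0

print(lhs)               # 30.000000000000004
print(rhs)               # 30.0
print(lhs == rhs)        # False -> distributivity fails
```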

(c) Find out to how many decimal places the value 22/7 is accurate as an approximation of 3.14159265, where the latter is the value of π calculated up to 8 places after the decimal. 
Ans. To determine how many decimal places the fraction 22/7 is accurate as an approximation of π = 3.14159265 (taken to 8 decimal places), we can follow these steps:

  1. Calculate 22/7 in decimal form:

    22/7 = 3.142857142857...

  2. Compare 22/7 with π calculated to 8 decimal places:

    π = 3.14159265

Now, let's compare the two values digit by digit after the decimal point.

  • 22/7 = 3.142857...
  • π = 3.14159265

Digit comparison:

  • First decimal place: 1 vs. 1 (match)
  • Second decimal place: 4 vs. 4 (match)
  • Third decimal place: 2 vs. 1 (mismatch)

The absolute error is |22/7 − 3.14159265| ≈ 0.00126449, which is smaller than 0.5 × 10⁻² but larger than 0.5 × 10⁻³. Hence 22/7 is accurate to 2 decimal places as an approximation of π.
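A quick check of this comparison (a small sketch; the threshold 0.5 × 10⁻ᵈ is the usual criterion for accuracy to d decimal places):

```python
approx = 22 / 7                      # 3.142857142857...
pi_8 = 3.14159265                    # pi truncated to 8 decimals, as given

diff = abs(approx - pi_8)
print(round(diff, 8))                # 0.00126449
print(diff < 0.5e-2, diff < 0.5e-3)  # True False -> accurate to 2 decimal places
```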
(d) Calculate a bound for the truncation error in approximating f(x) = sin x by sin(x) ≈ x − x³/3! + x⁵/5!, where −1 ≤ x ≤ 1 and n! denotes the factorial of n. 
Ans. To calculate a bound for the truncation error in approximating sin(x) by the truncated Taylor series

sin(x) ≈ x − x³/3! + x⁵/5!,

we need to analyse the error term in the Taylor series expansion of sin(x). The general Taylor series expansion of sin(x) around x = 0 is:

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ...

Truncation Error:

Because this is an alternating series whose terms decrease in magnitude for |x| ≤ 1, the truncation error after the x⁵/5! term is bounded by the first neglected term:

|E(x)| ≤ |x|⁷ / 7!

Bound for the Truncation Error:

To find an upper bound on the interval −1 ≤ x ≤ 1, we evaluate |x|⁷/7! at the maximum possible value of |x|. Since |x| ≤ 1, the maximum of |x|⁷ occurs at |x| = 1, so

|E(x)| ≤ 1⁷/7! = 1/5040 ≈ 1.98 × 10⁻⁴.
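A small numerical check of this bound (sampling the interval is only an illustration, not part of the derivation):

```python
import math

def p(x: float) -> float:
    """Truncated series x - x^3/3! + x^5/5!."""
    return x - x**3 / math.factorial(3) + x**5 / math.factorial(5)

bound = 1 / math.factorial(7)                               # 1/5040 ~ 1.98e-4
worst = max(abs(math.sin(t / 1000) - p(t / 1000))           # sample [-1, 1]
            for t in range(-1000, 1001))
print(bound, worst)   # the observed maximum error stays below the 1/5040 bound
```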
To approximate (3.7)⁻¹ using the first three terms of Taylor's series expansion, we can start by expanding around a nearby point where the value of the function f(x) = x⁻¹ is easy to compute.

Let's expand around x = 4, since 1/4 = 0.25 is simple to calculate. Define:

f(x) = 1/x

The Taylor series expansion of f(x) around x = a is given by:

f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ...

In this case, we'll expand around a = 4 and approximate f(3.7).

Step 1: Compute the derivatives of f(x)

  1. f(x) = 1/x
  2. f′(x) = −1/x²
  3. f″(x) = 2/x³

Step 2: Evaluate the derivatives at x = 4

  1. f(4) = 1/4 = 0.25
  2. f′(4) = −1/4² = −1/16 = −0.0625
  3. f″(4) = 2/4³ = 2/64 = 0.03125

Step 3: Write the Taylor expansion around x = 4

The Taylor expansion is:

f(3.7) ≈ f(4) + f′(4)(3.7 − 4) + (f″(4)/2!)(3.7 − 4)²

Substitute the values:

f(3.7) ≈ 0.25 + (−0.0625)(3.7 − 4) + (0.03125/2)(3.7 − 4)²

Step 4: Calculate the values

  1. 3.7 − 4 = −0.3
  2. First term: 0.25
  3. Second term: (−0.0625)(−0.3) = 0.01875
  4. Third term: (0.03125/2)(−0.3)² = 0.015625 × 0.09 = 0.00140625

Step 5: Add the terms

Now sum the three terms:

f(3.7) ≈ 0.25 + 0.01875 + 0.00140625 = 0.27015625

Hence (3.7)⁻¹ ≈ 0.2702; the true value is 0.27027027..., so the approximation is in error by about 1.1 × 10⁻⁴.
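The calculation can be verified with a few lines of Python (the names f0, f1, f2 are just labels for the derivative values at a = 4):

```python
a, x = 4.0, 3.7
f0 = 1 / a            # f(4)   =  0.25
f1 = -1 / a**2        # f'(4)  = -0.0625
f2 = 2 / a**3         # f''(4) =  0.03125

approx = f0 + f1 * (x - a) + f2 / 2 * (x - a) ** 2
print(round(approx, 8))                # 0.27015625
print(round(abs(1 / 3.7 - approx), 6)) # 0.000114 -> error of the 3-term expansion
```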
We are given the system of equations:

4x₁ + x₂ + 2x₃ = 16
2x₁ + 5x₂ + 3x₃ = 19
3x₁ + 2x₂ − x₃ = 12

Let's solve this system using Gaussian elimination.

Step 1: Write the system in augmented matrix form

The system of equations can be represented as the following augmented matrix:

[ 4   1    2  | 16 ]
[ 2   5    3  | 19 ]
[ 3   2   −1  | 12 ]

Step 2: Perform Gaussian elimination

(i) Make the first entry in the first column a leading 1

To make the first row's first element 1, divide the first row by 4:

[ 1   1/4   1/2  |  4 ]
[ 2   5     3    | 19 ]
[ 3   2    −1    | 12 ]

(ii) Eliminate the first-column elements below the first pivot

Subtract 2 times the first row from the second row and 3 times the first row from the third row:

  • Row 2: R₂ → R₂ − 2R₁
  • Row 3: R₃ → R₃ − 3R₁

[ 1   1/4    1/2  |  4 ]
[ 0   9/2    2    | 11 ]
[ 0   5/4   −5/2  |  0 ]

(iii) Make the second entry in the second column a leading 1

Multiply the second row by 2/9:

[ 1   1/4    1/2  |  4    ]
[ 0   1      4/9  | 22/9  ]
[ 0   5/4   −5/2  |  0    ]

(iv) Eliminate the second-column elements above and below the pivot

  • R₁ → R₁ − (1/4)R₂
  • R₃ → R₃ − (5/4)R₂

[ 1   0     7/18   |  61/18 ]
[ 0   1     4/9    |  22/9  ]
[ 0   0   −55/18   | −55/18 ]

(v) Make the third entry in the third column a leading 1

Multiply the third row by −18/55:

[ 1   0   7/18  | 61/18 ]
[ 0   1   4/9   | 22/9  ]
[ 0   0   1     |  1    ]

(vi) Eliminate the third-column elements above the pivot

  • R₂ → R₂ − (4/9)R₃
  • R₁ → R₁ − (7/18)R₃

[ 1   0   0 | 3 ]
[ 0   1   0 | 2 ]
[ 0   0   1 | 1 ]

Step 3: Extract the solution

From the final matrix we read off the values of x₁, x₂ and x₃:

x₁ = 3,  x₂ = 2,  x₃ = 1

Check: 4(3) + 2 + 2(1) = 16, 2(3) + 5(2) + 3(1) = 19, 3(3) + 2(2) − 1 = 12.
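A quick check of the result, using NumPy's linear solver purely as verification of the elimination above:

```python
import numpy as np

A = np.array([[4.0, 1.0, 2.0],
              [2.0, 5.0, 3.0],
              [3.0, 2.0, -1.0]])
b = np.array([16.0, 19.0, 12.0])

x = np.linalg.solve(A, b)
print(np.round(x, 6))            # [3. 2. 1.]
print(np.allclose(A @ x, b))     # True -> the solution satisfies all three equations
```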
 Perform four iterations (rounded to four decimal places), using (i) the Jacobi method and (ii) the Gauss-Seidel method, for the following system of equations:

5x₁ − 5x₂ − x₃ = −8
 x₁ − 4x₂ + x₃ = −4
−2x₁ + x₂ − 6x₃ = −18

with x⁽⁰⁾ = (0, 0, 0)ᵀ. The exact solution is (1, 2, 3)ᵀ. Which method gives the better approximation to the exact solution?
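The four requested iterations of both methods can be generated with a short sketch. It assumes the system as written above (these coefficients reproduce the stated exact solution (1, 2, 3)ᵀ) and uses the iteration formulas obtained by solving each equation for its diagonal unknown:

```python
# Jacobi: all updates use the previous iterate.
def jacobi_step(x1, x2, x3):
    return ((-8 + 5 * x2 + x3) / 5,
            (4 + x1 + x3) / 4,
            (18 - 2 * x1 + x2) / 6)

# Gauss-Seidel: each update immediately uses the newest values.
def gauss_seidel_step(x1, x2, x3):
    x1 = (-8 + 5 * x2 + x3) / 5
    x2 = (4 + x1 + x3) / 4
    x3 = (18 - 2 * x1 + x2) / 6
    return x1, x2, x3

for name, step in (("Jacobi", jacobi_step), ("Gauss-Seidel", gauss_seidel_step)):
    x = (0.0, 0.0, 0.0)
    for k in range(1, 5):
        x = step(*x)
        print(name, k, tuple(round(v, 4) for v in x))
```

Running the sketch shows the Gauss-Seidel iterates ending the fourth iteration noticeably closer to (1, 2, 3)ᵀ than the Jacobi iterates, which is the usual behaviour when both methods converge.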

Question 3. (a) Determine the smallest root of the following equation: f(x) = x² cos(x) + sin(x) = 0, to three significant digits, using (i) Regula Falsi method (ii) Newton-Raphson method (iii) Bisection method (iv) Secant method 

Ans.

1. Regula Falsi Method (False Position Method)

The Regula Falsi method is a root-finding method that combines the advantages of the bisection method and linear interpolation.

Step 1: Define the function

f(x) = x² cos(x) + sin(x)

Step 2: Choose two initial guesses x₀ and x₁ such that f(x₀) and f(x₁) have opposite signs (i.e., f(x₀) × f(x₁) < 0).

Let's start by testing values near zero:

f(0) = 0² cos(0) + sin(0) = 0
f(1) = 1² cos(1) + sin(1) = 0.5403 + 0.8415 = 1.3818
f(−1) = (−1)² cos(−1) + sin(−1) = 0.5403 − 0.8415 = −0.3012

(Note that f(0) = 0, so x = 0 is itself a root of the equation; the bracketing interval below is used to illustrate the method.) Since f(−1) × f(1) < 0, we can use the interval [−1, 1].

Step 3: Apply the Regula Falsi formula to compute the next approximation.

The formula for the Regula Falsi method is:

x_new = x₁ − f(x₁)(x₁ − x₀) / (f(x₁) − f(x₀))

Let's perform the iterations, rounding the results to three significant digits:

  • Iteration 1:

x_new = 1 − f(1) × (1 − (−1)) / (f(1) − f(−1)) = 1 − (1.3818 × 2) / (1.3818 + 0.3012) = 1 − 2.7636/1.6830 = −0.642

  • f(−0.642) ≈ −0.269 has the same sign as f(−1), so the interval is updated to [−0.642, 1].

Repeat this process until convergence.


2. Newton-Raphson Method

Newton-Raphson is an iterative method where the next approximation is given by:

x_{n+1} = x_n − f(x_n) / f′(x_n)

Step 1: Define the function and its derivative:

f(x) = x² cos(x) + sin(x)
f′(x) = 2x cos(x) − x² sin(x) + cos(x)

Step 2: Choose an initial guess:

Start with x₀ = 0.5.

Step 3: Perform iterations:

  • Iteration 1:

x₁ = x₀ − f(x₀)/f′(x₀) = 0.5 − (0.5² cos(0.5) + sin(0.5)) / (2(0.5)cos(0.5) − 0.5² sin(0.5) + cos(0.5)) = 0.5 − 0.6988/1.6353 ≈ 0.0727

Continue the iterations until x_n converges to a value correct to three significant digits.


3. Bisection Method

The bisection method repeatedly halves an interval containing the root.

Step 1: Choose two initial guesses:

From earlier, we can use x₀ = −1 and x₁ = 1.

Step 2: Compute the midpoint:

x_mid = (x₀ + x₁) / 2

Step 3: Update the interval:

Check the sign of f(x_mid) and update the interval accordingly.

Repeat this process until convergence to three significant digits.


4. Secant Method

The secant method is similar to Newton-Raphson but doesn't require the derivative.

Step 1: Choose two initial guesses:

We can use x₀ = 1 and x₁ = 0.5.

Step 2: Apply the secant method formula:

x_{n+1} = x_n − f(x_n)(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))

Step 3: Perform iterations:

Use the formula to iterate until the values converge to three significant digits.
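A compact sketch of all four methods applied to this equation is given below. The stopping tolerance (successive estimates agreeing to roughly three significant figures) and the iteration cap are assumptions; the bracket [−1, 1] contains the root x = 0, toward which all four iterations head:

```python
import math

def f(x):
    return x * x * math.cos(x) + math.sin(x)

def df(x):
    return 2 * x * math.cos(x) - x * x * math.sin(x) + math.cos(x)

def bisection(a, b, tol=5e-4):
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m          # root lies in the left half
        else:
            a = m          # root lies in the right half
    return (a + b) / 2

def regula_falsi(a, b, tol=5e-4, it=100):
    x_old = a
    for _ in range(it):
        x = b - f(b) * (b - a) / (f(b) - f(a))   # intersection of the chord with y = 0
        if f(a) * f(x) < 0:
            b = x
        else:
            a = x
        if abs(x - x_old) < tol:
            break
        x_old = x
    return x

def newton(x, tol=5e-4, it=100):
    for _ in range(it):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def secant(x0, x1, tol=5e-4, it=100):
    for _ in range(it):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x2

# all four head to the root at x = 0 inside the chosen bracket
print(round(regula_falsi(-1, 1), 3), round(newton(0.5), 3),
      round(bisection(-1, 1), 3), round(secant(1, 0.5), 3))
```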

Question 4. (a) Explain the role of interpolation in solving numerical problems.

Ans. 

1. Estimating Intermediate Values

In many scientific and engineering problems, data is often collected at specific intervals, but it may be necessary to estimate values at points between the collected data. Interpolation provides a way to approximate these unknown values using mathematical functions. For example:

  • In weather forecasting, temperature or pressure may be known at specific times of the day, but we may need to estimate the values at times in between.
  • In finance, if stock prices are known at specific times, interpolation can estimate prices between those times.

Example: If we know f(1) = 2 and f(2) = 3, interpolation can help estimate f(1.5) using a method such as linear interpolation or polynomial interpolation; simple linear interpolation gives f(1.5) ≈ 2.5 (see the sketch below).
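A one-line sketch of that estimate (the helper linear_interp is hypothetical, shown only to illustrate the formula):

```python
def linear_interp(x, x0, y0, x1, y1):
    """Linear interpolation: f(x) ~ y0 + (x - x0) * (y1 - y0) / (x1 - x0)."""
    return y0 + (x - x0) * (y1 - y0) / (x1 - x0)

print(linear_interp(1.5, 1, 2, 2, 3))   # 2.5
```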

2. Constructing Continuous Functions from Discrete Data

When data is discrete, it often doesn't provide a continuous picture of the system being analyzed. Interpolation creates a smooth function that approximates the behavior of the system over a continuous range. This is useful for:

  • Numerical integration: Interpolation helps create a smooth function to integrate when only discrete data points are available.
  • Numerical differentiation: When you have discrete points, interpolation can be used to estimate derivatives, which are defined only for continuous functions.

3. Reducing Errors in Numerical Methods

Interpolation helps in minimizing errors in numerical methods. For example:

  • In root-finding methods (such as the Regula Falsi method or Secant method), interpolation is used to approximate where the function crosses the x-axis based on known values of the function at two points.
  • In curve fitting, interpolation is used to estimate intermediate values to fit a curve that passes through or near the known data points.

4. Smooth Data Representation

In applications such as image processing, audio signal processing, and geographic information systems (GIS), interpolation provides a way to smooth data and fill in gaps:

  • Image Processing: Interpolation techniques like bilinear or bicubic interpolation are used to resize images or enhance image quality.
  • Audio Processing: In signal processing, interpolation is used to reconstruct missing samples in audio data.

5. Handling Nonlinear Systems

In nonlinear systems where analytical solutions might not be possible, interpolation provides an approximate solution. For instance, polynomial interpolation can be used to approximate solutions to differential equations by constructing polynomials that fit known points and estimating solutions between them.

6. Interpolation Methods

There are different types of interpolation techniques depending on the nature of the problem:

  • Linear Interpolation: Used when the data points form a straight line or are approximately linear between two known points.
  • Polynomial Interpolation: Uses higher-degree polynomials to approximate more complex curves between data points.
  • Spline Interpolation: Uses piecewise polynomials to provide smoother approximations, especially useful in cases where the data points are non-linear and smoothness is required.
(b) Express Δ³f₁ as a backward difference.
Ans. To express Δ³f₁ as a backward difference, recall how the forward and backward difference operators are defined:

Δf_i = f_{i+1} − f_i,   ∇f_i = f_i − f_{i−1}

1. Relation between the two operators:

Since Δf_i = f_{i+1} − f_i = ∇f_{i+1}, each application of Δ at index i is the same as an application of ∇ one index higher. Applying this three times gives:

Δ³f_i = ∇³f_{i+3}, so Δ³f₁ = ∇³f₄

2. Expansion of Δ³f₁ in function values:

Δ³f₁ = Δ²f₂ − Δ²f₁ = (f₄ − 2f₃ + f₂) − (f₃ − 2f₂ + f₁) = f₄ − 3f₃ + 3f₂ − f₁

3. Expansion of ∇³f₄ in function values:

∇³f₄ = ∇²f₄ − ∇²f₃ = (f₄ − 2f₃ + f₂) − (f₃ − 2f₂ + f₁) = f₄ − 3f₃ + 3f₂ − f₁

Conclusion:

The two expansions are identical, so

Δ³f₁ = ∇³f₄ = f₄ − 3f₃ + 3f₂ − f₁

This expresses the third forward difference at f₁ as a third backward difference taken at f₄.
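A quick numeric check of the identity on made-up sample values (np.diff applied with n=3 builds the third forward differences):

```python
import numpy as np

f = np.array([2.0, 5.0, 3.0, 8.0, 1.0])          # sample values f_0 .. f_4

delta3_f1 = np.diff(f, n=3)[1]                   # forward difference Delta^3 f_1
nabla3_f4 = f[4] - 3 * f[3] + 3 * f[2] - f[1]    # backward difference nabla^3 f_4
print(delta3_f1, nabla3_f4)                      # identical values (-19.0 here)
```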

(c) Express Δ³f₁ as a central difference.

Ans. To express Δ³f₁ as a central difference, we use the central difference operator δ, which is defined at the midpoint between two tabular points:

δf_r = f_{r+1/2} − f_{r−1/2}

Comparing the operators, Δf_i = f_{i+1} − f_i = δf_{i+1/2}, so each application of Δ at index i is the same as an application of δ half an index higher. Applying this three times gives:

Δ³f_i = δ³f_{i+3/2}, so Δ³f₁ = δ³f_{5/2}

Let's verify this by expanding the central differences step by step:

1. First central differences (taken at half-integer points):

δf_{3/2} = f₂ − f₁,   δf_{5/2} = f₃ − f₂,   δf_{7/2} = f₄ − f₃

2. Second central differences (taken at integer points):

δ²f₂ = δf_{5/2} − δf_{3/2} = f₃ − 2f₂ + f₁
δ²f₃ = δf_{7/2} − δf_{5/2} = f₄ − 2f₃ + f₂

3. Third central difference (taken at a half-integer point):

δ³f_{5/2} = δ²f₃ − δ²f₂ = (f₄ − 2f₃ + f₂) − (f₃ − 2f₂ + f₁) = f₄ − 3f₃ + 3f₂ − f₁

Conclusion:

This is exactly the expansion of Δ³f₁ obtained in part (b), so

Δ³f₁ = δ³f_{5/2} = f₄ − 3f₃ + 3f₂ − f₁
The data points are:

 i    x_i     y_i
 0    −1      16.8575
 1     0      24.0625
 2     1      16.5650
 3     2     −13.9375
 4     3      28.5625
 5     4     144.0625

Step 1: Difference Table

The difference table is constructed by finding the first, second, third, and higher-order differences of the function values y_i. For the forward difference table, each difference is calculated as:

Δy_i = y_{i+1} − y_i

We'll proceed to calculate each difference step by step.

First-order differences (Δ¹y_i):

Δ¹y₀ = y₁ − y₀ = 24.0625 − 16.8575 = 7.2050
Δ¹y₁ = y₂ − y₁ = 16.5650 − 24.0625 = −7.4975
Δ¹y₂ = y₃ − y₂ = −13.9375 − 16.5650 = −30.5025
Δ¹y₃ = y₄ − y₃ = 28.5625 − (−13.9375) = 42.5000
Δ¹y₄ = y₅ − y₄ = 144.0625 − 28.5625 = 115.5000

Second-order differences (Δ²y_i):

Δ²y₀ = Δ¹y₁ − Δ¹y₀ = −7.4975 − 7.2050 = −14.7025
Δ²y₁ = Δ¹y₂ − Δ¹y₁ = −30.5025 − (−7.4975) = −23.0050
Δ²y₂ = Δ¹y₃ − Δ¹y₂ = 42.5000 − (−30.5025) = 73.0025
Δ²y₃ = Δ¹y₄ − Δ¹y₃ = 115.5000 − 42.5000 = 73.0000

Third-order differences (Δ³y_i):

Δ³y₀ = Δ²y₁ − Δ²y₀ = −23.0050 − (−14.7025) = −8.3025
Δ³y₁ = Δ²y₂ − Δ²y₁ = 73.0025 − (−23.0050) = 96.0075
Δ³y₂ = Δ²y₃ − Δ²y₂ = 73.0000 − 73.0025 = −0.0025

Fourth-order differences (Δ⁴y_i):

Δ⁴y₀ = Δ³y₁ − Δ³y₀ = 96.0075 − (−8.3025) = 104.3100
Δ⁴y₁ = Δ³y₂ − Δ³y₁ = −0.0025 − 96.0075 = −96.0100

Fifth-order difference (Δ⁵y_i):

Δ⁵y₀ = Δ⁴y₁ − Δ⁴y₀ = −96.0100 − 104.3100 = −200.3200

Difference Table

 i   x_i     y_i        Δ¹y_i      Δ²y_i      Δ³y_i     Δ⁴y_i       Δ⁵y_i
 0   −1      16.8575     7.2050   −14.7025    −8.3025   104.3100   −200.3200
 1    0      24.0625    −7.4975   −23.0050    96.0075   −96.0100
 2    1      16.5650   −30.5025    73.0025    −0.0025
 3    2     −13.9375    42.5000    73.0000
 4    3      28.5625   115.5000
 5    4     144.0625

Step 2: Forward Differences

The forward differences for the given data are read from the top of the difference table, as shown under the columns Δ¹y_i, Δ²y_i, Δ³y_i, etc.

Step 3: Backward Differences

For backward differences, we use the differences starting from the bottom of the table. Here's how you can compute the backward differences for the data:

First-order backward differences (∇¹y_i):

∇¹y₅ = y₅ − y₄ = 144.0625 − 28.5625 = 115.5000
∇¹y₄ = y₄ − y₃ = 28.5625 − (−13.9375) = 42.5000
∇¹y₃ = y₃ − y₂ = −13.9375 − 16.5650 = −30.5025
∇¹y₂ = y₂ − y₁ = 16.5650 − 24.0625 = −7.4975
∇¹y₁ = y₁ − y₀ = 24.0625 − 16.8575 = 7.2050

Second-order backward differences (∇²y_i):

∇²y₅ = ∇¹y₅ − ∇¹y₄ = 115.5000 − 42.5000 = 73.0000
∇²y₄ = ∇¹y₄ − ∇¹y₃ = 42.5000 − (−30.5025) = 73.0025
∇²y₃ = ∇¹y₃ − ∇¹y₂ = −30.5025 − (−7.4975) = −23.0050
∇²y₂ = ∇¹y₂ − ∇¹y₁ = −7.4975 − 7.2050 = −14.7025

Third-order backward differences (∇³y_i):

∇³y₅ = ∇²y₅ − ∇²y₄ = 73.0000 − 73.0025 = −0.0025
∇³y₄ = ∇²y₄ − ∇²y₃ = 73.0025 − (−23.0050) = 96.0075
∇³y₃ = ∇²y₃ − ∇²y₂ = −23.0050 − (−14.7025) = −8.3025

Fourth-order backward differences (∇⁴y_i):

∇⁴y₅ = ∇³y₅ − ∇³y₄ = −0.0025 − 96.0075 = −96.0100
∇⁴y₄ = ∇³y₄ − ∇³y₃ = 96.0075 − (−8.3025) = 104.3100

Fifth-order backward difference (∇⁵y_i):

∇⁵y₅ = ∇⁴y₅ − ∇⁴y₄ = −96.0100 − 104.3100 = −200.3200

These are the same numbers as the forward differences, read along the bottom diagonal of the table.
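The whole table can be regenerated with a short sketch; forward and backward differences are the same numbers, only read along different diagonals:

```python
import numpy as np

y = np.array([16.8575, 24.0625, 16.5650, -13.9375, 28.5625, 144.0625])

for order in range(1, 6):
    d = np.diff(y, n=order)          # repeated forward differencing
    print(f"order {order}:", np.round(d, 4))
# order 1 starts with 7.2050 (= Delta^1 y_0) and ends with 115.5000 (= nabla^1 y_5)
```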

Question 6. (a) Find the values of the first and second derivatives of f(x) at x = 76 from the following table. Use the O(h²) forward difference method. Also find the truncation error (TE) and the actual errors. 
Ans. 

Step 1: Set up the forward difference formulas

The forward difference method approximates derivatives by using differences between successive function values at regular intervals.

Formula for the first derivative using forward differences (O(h²)):

The O(h²) forward difference formula for the first derivative at x₀ is:

f′(x₀) ≈ (−3f(x₀) + 4f(x₁) − f(x₂)) / (2h)

where h is the step size and f(x₀), f(x₁), f(x₂) are the function values at x₀, x₁ = x₀ + h and x₂ = x₀ + 2h.

Formula for the second derivative using forward differences:

The basic forward difference formula for the second derivative at x₀ is:

f″(x₀) ≈ (f(x₀) − 2f(x₁) + f(x₂)) / h²

(As a forward approximation at x₀ this three-point formula is only first-order accurate; the O(h²) forward formula needs a fourth point, f″(x₀) ≈ (2f(x₀) − 5f(x₁) + 4f(x₂) − f(x₃)) / h². The three-point version is used below because only three table values are assumed.)

Step 2: Data and step size

Let's assume we have a table of values for f(x) near x = 76. Here's an example of such a table:

 x      f(x)
 76     f(76)
 78     f(78)
 80     f(80)

Assume the values of f(76), f(78) and f(80) are provided, and the step size is h = 2 (since x₁ = 76 + 2 = 78 and x₂ = 76 + 4 = 80).

Step 3: Calculate the first and second derivatives

First derivative:

Using the first derivative formula:

f′(76) ≈ (−3f(76) + 4f(78) − f(80)) / (2h)

Substitute the values of f(76), f(78), f(80) and h = 2 to get the approximate first derivative.

Second derivative:

Using the second derivative formula:

f″(76) ≈ (f(76) − 2f(78) + f(80)) / h²

Substitute the values of f(76), f(78), f(80) and h = 2 to get the approximate second derivative.

Step 4: Truncation Error (TE)

The truncation error for the forward difference method depends on the next higher-order terms ignored in the Taylor series expansion.

Truncation error for the first derivative:

For the three-point forward formula, the truncation error is proportional to h²:

TE₁ = −(h²/3) f⁽³⁾(ξ)

where ξ is some point in the interval [x₀, x₂] and f⁽³⁾(ξ) is the third derivative of the function at ξ.

Truncation error for the second derivative:

For the three-point formula used above, the leading truncation error as a forward approximation at x₀ is proportional to h, not h²:

TE₂ ≈ −h f⁽³⁾(ξ)

(The familiar bound −(h²/12) f⁽⁴⁾(ξ) applies when the same formula is centred at x₁; the four-point forward formula quoted earlier is the one with an O(h²) error at x₀.)

Step 5: Actual Error

To compute the actual error, you would compare the forward difference approximation with the true values of the first and second derivatives (which may be given or calculated analytically from f(x)).

Actual Error = |True Value − Approximate Value|

If the true values of the derivatives are provided or known, this step can be carried out directly.

Example:

Let's assume the following hypothetical values for the function at x = 76, x = 78 and x = 80:

 x      f(x)
 76     12.0
 78     15.5
 80     19.0

First derivative calculation:

f′(76) ≈ (−3(12.0) + 4(15.5) − 19.0) / (2 × 2) = (−36.0 + 62.0 − 19.0) / 4 = 7.0/4 = 1.75

Second derivative calculation:

f″(76) ≈ (12.0 − 2(15.5) + 19.0) / 2² = (12.0 − 31.0 + 19.0) / 4 = 0.0/4 = 0.0
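The two calculations transcribe directly into code (using the hypothetical table values assumed above):

```python
h = 2.0
f0, f1, f2 = 12.0, 15.5, 19.0               # assumed values f(76), f(78), f(80)

first = (-3 * f0 + 4 * f1 - f2) / (2 * h)   # O(h^2) forward estimate of f'(76)
second = (f0 - 2 * f1 + f2) / h**2          # forward estimate of f''(76)

print(first, second)                        # 1.75 0.0
```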

  #CODEWITHCHIRAYU

                                                              Thank You 😊                                                            
