BCS-054 Computer Oriented Numerical Techniques (Assignment Solutions)
Course Code : BCS-054
Course Title : Computer Oriented Numerical Techniques
Assignment Number : BCA(V)054/Assignment/2024-25
Maximum Marks : 100
Weightage : 25%
Last Date of Submission : 31st October, 2024 (for the July session); 30th April, 2025 (for the January session)
(a) Explain each of the following concepts, along with at least one suitable example for each: (i) Fixed-point number representation (ii) round-off error (iii) representation of zero as floating point number (iv) significant digits in a decimal number representation (v) normalized representation of a floating point number (vi) overflow
Ans.
(i) Fixed-point number representation
Fixed-point representation is a way to store numbers where the decimal (or binary) point is fixed at a specific position. In this system, numbers are represented with a fixed number of digits before and after the decimal point, which makes it suitable for representing integers or fractions where the precision is predetermined.
- Example:
Consider a 5-digit fixed-point decimal system with 2 digits reserved for the fractional part. In this case, the number 123.45 would be stored as the integer 12345 and interpreted as 123.45. Similarly, the number 5.67 would be stored as 00567 and interpreted as 5.67.
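Fixed-point arithmetic can be sketched in Python with scaled integers (a scale factor of 100 for two fractional digits, as in the example above):

```python
SCALE = 100  # two decimal places reserved for the fraction

def to_fixed(x: float) -> int:
    """Encode a real number as a scaled integer (fixed-point)."""
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    """Decode a scaled integer back to a real number."""
    return n / SCALE

stored = to_fixed(123.45)
print(stored, from_fixed(stored))   # 12345 123.45
```

All arithmetic then happens on plain integers, with the decimal point re-inserted only when the value is displayed.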
(ii) Round-off error
Round-off error occurs when a number cannot be represented exactly in a given number format, so it is approximated. This is common in floating-point arithmetic, where some numbers (e.g., irrational numbers or certain decimals) cannot be represented with full precision.
- Example:
When representing the decimal number 0.1 in binary floating-point, it becomes a repeating binary fraction (0.0001100110011...). Since the computer cannot store this infinite sequence, it approximates it to a fixed number of bits, resulting in a round-off error. If you perform calculations based on this approximation, the results may not be exactly correct.
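A one-line check in Python makes the effect visible:

```python
# 0.1 has no finite binary expansion, so its stored value is approximate;
# the tiny errors accumulate when several approximations are added.
total = 0.1 + 0.1 + 0.1
print(total)          # 0.30000000000000004, not exactly 0.3
print(total == 0.3)   # False
print(f"{0.1:.20f}")  # shows the stored approximation of 0.1
```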
(iii) Representation of zero as floating point number
In floating-point representation (e.g., IEEE 754), zero can be represented as both positive zero (+0) and negative zero (-0). These two values are numerically equal but differ in their sign bit, and they can behave differently in certain computations (e.g., in division or underflow scenarios).
Example: In IEEE 754 single-precision format, positive zero is represented by a sign bit of 0, with both exponent and fraction set to 0. Negative zero is represented with the sign bit as 1, and exponent and fraction as 0.
- Positive zero:
+0 → 0 00000000 00000000000000000000000
- Negative zero:
-0 → 1 00000000 00000000000000000000000
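A small Python check of the two signed zeros:

```python
import math

pos = 0.0
neg = -0.0
print(pos == neg)             # True: +0 and -0 compare as numerically equal
print(math.copysign(1, pos))  # 1.0  -> the sign bit of +0 is 0
print(math.copysign(1, neg))  # -1.0 -> the sign bit of -0 is 1
```

`math.copysign` is one of the few operations that distinguishes the two, since an ordinary `==` comparison treats them as the same value.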
(iv) Significant digits in a decimal number representation
Significant digits are the meaningful digits in a number, starting from the first non-zero digit. They convey the precision of the number, which is important in scientific and computational contexts.
- Example:
In the number 0.00567, the significant digits are 5, 6, and 7, so the number has 3 significant digits (the leading zeros only fix the position of the decimal point). Similarly, the number 123.400 has 6 significant digits, because trailing zeros after the decimal point are considered significant.
(v) Normalized representation of a floating point number
In floating-point arithmetic, a normalized number is one in which the leading digit of the significand (mantissa) is non-zero. This ensures that the representation of numbers is unique and maximizes precision for a given number of bits.
- Example:
In a decimal floating-point system, the number 1.23 × 10^3 is stored in normalized form with mantissa 1.23 and exponent 3; the non-zero leading digit 1 makes the representation normalized. In binary, a normalized number has a significand that starts with a 1, e.g., 1.101 × 2^5.
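In Python, `math.frexp` exposes the normalized binary significand of a float:

```python
import math

# math.frexp decomposes x as m * 2**e with 0.5 <= |m| < 1, i.e. a
# normalized binary significand whose leading fraction bit is always 1.
for x in [52.0, 0.15625]:
    m, e = math.frexp(x)
    print(f"{x} = {m} * 2**{e}")
# 52.0    = 0.8125 * 2**6
# 0.15625 = 0.625  * 2**-2
```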
(vi) Overflow
Overflow occurs when a calculation exceeds the maximum limit that can be stored in a given data type or number format. This typically results in incorrect results or system errors, as the value wraps around or saturates.
Example: In an 8-bit signed (two's-complement) integer representation, the range of representable numbers is from -128 to 127. If you try to add two numbers such as 120 + 10 = 130, it results in an overflow, since 130 is outside the allowable range. The result may "wrap around" and produce an incorrect negative number, depending on the system. For example, 120 + 10 yields -126 in wrapping 8-bit signed arithmetic, because the sum is reduced modulo 256.
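Python integers do not overflow, so the 8-bit wraparound can be sketched explicitly:

```python
def wrap_int8(n: int) -> int:
    """Reduce n modulo 2**8 into the signed 8-bit range [-128, 127]."""
    return (n + 128) % 256 - 128

print(wrap_int8(120 + 10))  # -126: the sum wrapped past 127
print(wrap_int8(127 + 1))   # -128
```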
However, because floating-point numbers have limited precision, the results of arithmetic operations are rounded, and these rounding errors can cause the distributive property a × (b + c) = a × b + a × c to fail.
Explanation with Example
Consider a very large number b and a much smaller number c. On the left side of the distributive property, the sum b + c is computed first; if c is smaller than the gap between representable numbers near b, it is absorbed and b + c rounds back to b, so a × (b + c) loses the contribution of c entirely. On the right side, the products a × b and a × c are each computed and rounded separately before being added, so the contribution of c can survive. Because the two sides are rounded at different stages, they can disagree in their last digits: the two expressions are equal in exact arithmetic, but not in floating-point arithmetic.
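A minimal sketch in IEEE 754 double precision (the values a = 100, b = 10^16, c = 1 are illustrative, not the original ones):

```python
a, b, c = 100.0, 1e16, 1.0

lhs = a * (b + c)     # the 1.0 is absorbed: b + c rounds back to 1e16
rhs = a * b + a * c   # the 100.0 survives and perturbs the final sum

print(lhs == rhs)     # False: the distributive property fails
print(rhs - lhs)      # 128.0 (one ulp near 1e18) on IEEE 754 doubles
```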
To check how many decimal places of an approximation are accurate, calculated to 8 decimal places, we can follow these steps:
Calculate the approximation to 8 decimal places.
Compare it with the true value, also calculated to 8 decimal places.
Now, let's compare the two values digit by digit up to the 8th decimal place.
Digit Comparison:
- First decimal place: matches
- Second decimal place: matches
- Third decimal place: mismatch
The approximation is accurate to as many decimal places as match before the first mismatch; here, that is 2 decimal places.
To bound the error introduced by the truncated Taylor series, we need to analyze the error term in the Taylor series expansion. The general Taylor series expansion of f(x) around x = a is:
f(x) = f(a) + f′(a)(x − a) + f″(a)(x − a)²/2! + f‴(a)(x − a)³/3! + ...
Truncation Error:
The truncation error made when approximating a function by its Taylor series up to the degree-n term is given by the next (remainder) term in the series:
Rₙ(x) = f⁽ⁿ⁺¹⁾(ξ)(x − a)ⁿ⁺¹/(n + 1)!, for some ξ between a and x.
Bound for the Truncation Error:
To find an upper bound for the truncation error on an interval, we evaluate the remainder term for the values of ξ and x that maximize it; for an interval [a, b] on which |f⁽ⁿ⁺¹⁾| is increasing, the maximum occurs at x = b.
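A sketch under an assumption (the original function is not shown): taking f(x) = eˣ truncated after the x²/2! term on [0, 1], the remainder bound is R₂(x) = e^ξ x³/3! ≤ e/6, and the actual error indeed stays below it:

```python
import math

def p2(x):
    """Degree-2 Taylor polynomial of exp about a = 0 (assumed example)."""
    return 1 + x + x**2 / 2

for x in [0.25, 0.5, 1.0]:
    actual = math.exp(x) - p2(x)
    bound = math.e * x**3 / 6   # worst case on [0, 1]: xi = 1
    print(f"x={x}: error={actual:.6f}, bound={bound:.6f}")
```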
To approximate a value using the first three terms of Taylor's series expansion, we can start by expanding the function around a nearby point where the value of the function is easy to calculate.
Let's expand f around a point a where f(a) is simple to calculate.
The Taylor series expansion of f(x) around x = a, truncated to three terms, is:
f(x) ≈ f(a) + f′(a)(x − a) + f″(a)(x − a)²/2!
Step 1: Compute the derivatives f′ and f″ of f.
Step 2: Evaluate the derivatives at x = a.
Step 3: Write the Taylor expansion around a.
Substitute the values of f(a), f′(a), and f″(a) into the three-term expansion above.
Step 4: Calculate the values
- First term: f(a)
- Second term: f′(a)(x − a)
- Third term: f″(a)(x − a)²/2
Step 5: Add the terms
Now sum the three terms to obtain the approximation of f(x).
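As a worked illustration under an assumption (the original target value is not shown), here is the three-term approximation of √26 about a = 25, where f(25) = 5 is easy to compute:

```python
# f(x) = sqrt(x), expanded about a = 25 (assumed example).
a = 25.0
fa = 5.0                 # f(a)   = sqrt(25)
f1 = 1 / (2 * 5.0)       # f'(a)  = 1 / (2*sqrt(25))  = 0.1
f2 = -1 / (4 * 125.0)    # f''(a) = -1 / (4*25**1.5)  = -0.002

x = 26.0
approx = fa + f1 * (x - a) + f2 * (x - a) ** 2 / 2
print(approx)            # 5 + 0.1 - 0.001 = 5.099
print(26 ** 0.5)         # 5.0990195... for comparison
```

Three terms already give the true value to about four decimal places, because x = 26 is close to the expansion point.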
Let's solve this system using Gaussian elimination.
Step 1: Write the system in augmented matrix form
The system of equations can be represented as an augmented matrix [A | b].
Step 2: Perform Gaussian elimination
(i) Make the first entry in the first column a leading 1
To make the first row's first element 1, divide the first row by its pivot, 4: R₁ → R₁ / 4.
(ii) Eliminate the first column elements below the first pivot
We want to eliminate the elements below the first pivot (1). Subtract 2 times the first row from the second row and 3 times the first row from the third row:
- Row 2: R₂ → R₂ − 2R₁
- Row 3: R₃ → R₃ − 3R₁
(iii) Make the second entry in the second column a leading 1
To make the second row's second element a leading 1, multiply the second row by the reciprocal of its pivot.
(iv) Eliminate the second column elements below and above the pivot
- Eliminate the element above the pivot by subtracting the appropriate multiple of the second row from the first row: R₁ → R₁ − a₁₂R₂.
- Eliminate the element below the pivot by subtracting the appropriate multiple of the second row from the third row: R₃ → R₃ − a₃₂R₂.
(v) Make the third entry in the third column a leading 1
Multiply the third row by the reciprocal of its pivot to make the third element a leading 1.
(vi) Eliminate the third column elements above the pivot
- Eliminate the element above the pivot in the second row: R₂ → R₂ − a₂₃R₃.
- Eliminate the element above the pivot in the first row: R₁ → R₁ − a₁₃R₃.
After performing these operations, the matrix is in reduced row echelon form [I | x].
Step 3: Extract the solution
From the final matrix, we can read off the values of x, y, and z directly from the last column.
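Since the original coefficients are not shown, here is a sketch of the elimination procedure on a hypothetical 3×3 system whose first pivot is 4, consistent with the "divide the first row by 4" step:

```python
def gauss_jordan(A, b):
    """Solve A x = b by Gauss-Jordan elimination (no pivoting, for clarity)."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(n):
        # Scale the pivot row so the pivot becomes a leading 1.
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != i:
                f = M[r][i]
                M[r] = [v - f * w for v, w in zip(M[r], M[i])]
    return [row[-1] for row in M]

# Hypothetical system (the original one is not shown):
#   4x + y + z = 6,  2x + 3y + z = 6,  3x + y + 2z = 6  ->  x = y = z = 1
A = [[4.0, 1.0, 1.0],
     [2.0, 3.0, 1.0],
     [3.0, 1.0, 2.0]]
b = [6.0, 6.0, 6.0]
print(gauss_jordan(A, b))
```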
Question 3. (a) Determine the smallest root of the following equation: f(x) = x² cos(x) + sin(x) = 0, correct to three significant digits, using (i) Regula Falsi method (ii) Newton-Raphson method (iii) Bisection method (iv) Secant method.
Ans.
1. Regula Falsi Method (False Position Method)
The Regula Falsi method is a root-finding method that combines the advantages of the bisection method and linear interpolation.
Step 1: Define the function: f(x) = x² cos(x) + sin(x).
Step 2: Choose two initial guesses x₀ and x₁ such that f(x₀) and f(x₁) have opposite signs (i.e., f(x₀) · f(x₁) < 0).
Note that x = 0 is a trivial root, so we look for the smallest nonzero root. Testing values:
f(1.5) = (1.5)² cos(1.5) + sin(1.5) ≈ 1.157 > 0
f(2) = (2)² cos(2) + sin(2) ≈ −0.755 < 0
Since f(1.5) · f(2) < 0, we can use the interval [1.5, 2].
Step 3: Apply the Regula Falsi formula to compute the next approximation:
x₂ = x₁ − f(x₁)(x₁ − x₀) / (f(x₁) − f(x₀))
Let's perform the iterations, rounding the results to three significant digits:
- Iteration 1: x₂ = 2 − (−0.755)(2 − 1.5)/(−0.755 − 1.157) ≈ 1.80
- Update the interval to [1.80, 2] because f(1.80) ≈ 0.238 has the same sign as f(1.5).
Repeat this process until convergence; the iterates settle at x ≈ 1.85 to three significant digits.
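A minimal sketch of the false-position iteration (the bracket [1.5, 2] is assumed from the sign change of f):

```python
import math

def f(x):
    return x**2 * math.cos(x) + math.sin(x)

def regula_falsi(f, x0, x1, tol=1e-8, max_iter=100):
    """False-position iteration on a sign-changing interval [x0, x1]."""
    x2 = x1
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(f(x2)) < tol:
            break
        # Keep the sub-interval on which the sign change persists.
        if f(x0) * f(x2) < 0:
            x1 = x2
        else:
            x0 = x2
    return x2

root = regula_falsi(f, 1.5, 2.0)
print(round(root, 3))   # ≈ 1.854, i.e. 1.85 to three significant digits
```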
2. Newton-Raphson Method
Newton-Raphson is an iterative method where the next approximation is given by:
x_{n+1} = x_n − f(x_n) / f′(x_n)
Step 1: Define the function and its derivative:
f(x) = x² cos(x) + sin(x)
f′(x) = 2x cos(x) − x² sin(x) + cos(x) = (2x + 1) cos(x) − x² sin(x)
Step 2: Choose an initial guess:
Start with x₀ = 2.
Step 3: Perform iterations:
- Iteration 1: x₁ = 2 − f(2)/f′(2) = 2 − (−0.755)/(−5.718) ≈ 1.868
Simplifying and continuing the iterations, the sequence converges to x ≈ 1.85 to three significant digits.
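A corresponding sketch of the Newton-Raphson iteration (initial guess x₀ = 2 assumed):

```python
import math

def f(x):
    return x**2 * math.cos(x) + math.sin(x)

def fprime(x):
    return (2 * x + 1) * math.cos(x) - x**2 * math.sin(x)

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(f, fprime, 2.0)
print(round(root, 3))   # ≈ 1.854
```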
3. Bisection Method
The bisection method repeatedly halves an interval containing the root.
Step 1: Choose two initial guesses:
From earlier, we can use x₀ = 1.5 and x₁ = 2, since f(1.5) > 0 and f(2) < 0.
Step 2: Compute the midpoint:
x_m = (x₀ + x₁)/2 = 1.75
Step 3: Update the interval:
Check the sign of f(x_m) and replace the endpoint whose function value has the same sign. Here f(1.75) ≈ 0.438 > 0, so the interval becomes [1.75, 2].
Repeat this process until convergence to three significant digits: x ≈ 1.85.
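A sketch of the bisection loop (bracket [1.5, 2] assumed from the sign change of f):

```python
import math

def f(x):
    return x**2 * math.cos(x) + math.sin(x)

def bisect(f, a, b, tol=1e-8):
    """Halve [a, b] while keeping a sign change, until the interval is tiny."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

root = bisect(f, 1.5, 2.0)
print(round(root, 3))   # ≈ 1.854
```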
4. Secant Method
The secant method is similar to Newton-Raphson but doesn't require the derivative.
Step 1: Choose two initial guesses:
We can use x₀ = 1.5 and x₁ = 2.
Step 2: Apply the secant method formula:
x_{n+1} = x_n − f(x_n)(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))
Step 3: Perform iterations:
Use the formula to iterate until the values converge to three significant digits: x ≈ 1.85.
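A sketch of the secant iteration (starting points 1.5 and 2.0 assumed):

```python
import math

def f(x):
    return x**2 * math.cos(x) + math.sin(x)

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant iteration: Newton's method with f' replaced by a finite difference."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(f, 1.5, 2.0)
print(round(root, 3))   # ≈ 1.854
```

All four methods agree on the root 1.85 (three significant digits); they differ only in how many iterations they need and in whether a derivative or a bracketing interval is required.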
Question 4. (a) Explain the role of interpolation in solving numerical problems.
Ans.
1. Estimating Intermediate Values
In many scientific and engineering problems, data is often collected at specific intervals, but it may be necessary to estimate values at points between the collected data. Interpolation provides a way to approximate these unknown values using mathematical functions. For example:
- In weather forecasting, temperature or pressure may be known at specific times of the day, but we may need to estimate the values at times in between.
- In finance, if stock prices are known at specific times, interpolation can estimate prices between those times.
Example: If we know the values f(x₀) and f(x₁) at two points, interpolation can help estimate f(x) at any point between x₀ and x₁ using a method such as linear interpolation or polynomial interpolation.
2. Constructing Continuous Functions from Discrete Data
When data is discrete, it often doesn't provide a continuous picture of the system being analyzed. Interpolation creates a smooth function that approximates the behavior of the system over a continuous range. This is useful for:
- Numerical integration: Interpolation helps create a smooth function to integrate when only discrete data points are available.
- Numerical differentiation: When you have discrete points, interpolation can be used to estimate derivatives, which are defined only for continuous functions.
3. Reducing Errors in Numerical Methods
Interpolation helps in minimizing errors in numerical methods. For example:
- In root-finding methods (such as the Regula Falsi method or Secant method), interpolation is used to approximate where the function crosses the x-axis based on known values of the function at two points.
- In curve fitting, interpolation is used to estimate intermediate values to fit a curve that passes through or near the known data points.
4. Smooth Data Representation
In applications such as image processing, audio signal processing, and geographic information systems (GIS), interpolation provides a way to smooth data and fill in gaps:
- Image Processing: Interpolation techniques like bilinear or bicubic interpolation are used to resize images or enhance image quality.
- Audio Processing: In signal processing, interpolation is used to reconstruct missing samples in audio data.
5. Handling Nonlinear Systems
In nonlinear systems where analytical solutions might not be possible, interpolation provides an approximate solution. For instance, polynomial interpolation can be used to approximate solutions to differential equations by constructing polynomials that fit known points and estimating solutions between them.
6. Interpolation Methods
There are different types of interpolation techniques depending on the nature of the problem:
- Linear Interpolation: Used when the data points form a straight line or are approximately linear between two known points.
- Polynomial Interpolation: Uses higher-degree polynomials to approximate more complex curves between data points.
- Spline Interpolation: Uses piecewise polynomials to provide smoother approximations, especially useful in cases where the data points are non-linear and smoothness is required.
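The simplest of these, linear interpolation between two known points, can be sketched as follows (the sample data is illustrative):

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate y at x from the points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Illustrative data: a quantity known at x = 10 and x = 12,
# estimated at the intermediate point x = 11.
print(lerp(10.0, 18.0, 12.0, 24.0, 11.0))   # 21.0
```

Polynomial and spline interpolation generalize the same idea: they pass a smoother curve through more than two points.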
To express the third backward difference ∇³fᵢ, we use the concept of the third finite difference, i.e., we apply the backward difference operator three times. Let's break it down step by step:
1. First Backward Difference:
The first backward difference is defined as:
∇fᵢ = fᵢ − fᵢ₋₁
2. Second Backward Difference:
The second backward difference is the backward difference of the first backward difference:
∇²fᵢ = ∇fᵢ − ∇fᵢ₋₁
Substitute the first backward difference:
∇²fᵢ = (fᵢ − fᵢ₋₁) − (fᵢ₋₁ − fᵢ₋₂) = fᵢ − 2fᵢ₋₁ + fᵢ₋₂
3. Third Backward Difference:
The third backward difference is the backward difference of the second backward difference:
∇³fᵢ = ∇²fᵢ − ∇²fᵢ₋₁
Substitute the second backward difference:
∇³fᵢ = (fᵢ − 2fᵢ₋₁ + fᵢ₋₂) − (fᵢ₋₁ − 2fᵢ₋₂ + fᵢ₋₃)
Simplify:
∇³fᵢ = fᵢ − 3fᵢ₋₁ + 3fᵢ₋₂ − fᵢ₋₃
Conclusion:
The third backward difference is expressed as:
∇³fᵢ = fᵢ − 3fᵢ₋₁ + 3fᵢ₋₂ − fᵢ₋₃
This expression shows the third-order backward difference in terms of the values of the function at the points xᵢ, xᵢ₋₁, xᵢ₋₂, and xᵢ₋₃.
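A quick numerical check of the closed form, applying the operator three times to illustrative sample values f(x) = x³ at unit spacing:

```python
f = [x**3 for x in range(10)]   # sample values of f(x) = x**3

def bdiff(seq):
    """One application of the backward difference operator."""
    return [seq[i] - seq[i - 1] for i in range(1, len(seq))]

i = 6
nested = bdiff(bdiff(bdiff(f)))[i - 3]            # operator applied 3 times
closed = f[i] - 3*f[i-1] + 3*f[i-2] - f[i-3]      # closed-form expression
print(nested, closed)   # both 6: the third difference of x**3 is 3! = 6
```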
(c) Express Δ³f₁ as a central difference.
Ans. To express Δ³f₁ as a central difference, we need to use the central difference operator δ. The central difference method takes into account values both ahead of and behind the point of interest, rather than just the values before (backward difference) or after (forward difference).
The central difference operator is defined as:
δfᵢ = f_{i+1/2} − f_{i−1/2}
This gives the first-order central difference. For higher-order central differences, we repeatedly apply the operator.
Let's go through the steps to find the third-order central difference δ³fᵢ:
1. First Central Difference:
The first central difference is given by:
δfᵢ = f_{i+1/2} − f_{i−1/2}
2. Second Central Difference:
The second central difference is the central difference of the first central difference:
δ²fᵢ = δf_{i+1/2} − δf_{i−1/2}
Substitute the first central difference:
δ²fᵢ = (f_{i+1} − fᵢ) − (fᵢ − f_{i−1})
Simplify:
δ²fᵢ = f_{i+1} − 2fᵢ + f_{i−1}
3. Third Central Difference:
The third central difference is the central difference of the second central difference:
δ³fᵢ = δ²f_{i+1/2} − δ²f_{i−1/2}
Substitute the second central difference:
δ³fᵢ = (f_{i+3/2} − 2f_{i+1/2} + f_{i−1/2}) − (f_{i+1/2} − 2f_{i−1/2} + f_{i−3/2})
Simplify:
δ³fᵢ = f_{i+3/2} − 3f_{i+1/2} + 3f_{i−1/2} − f_{i−3/2}
Since Δ³f₁ = f₄ − 3f₃ + 3f₂ − f₁, comparing with the expression above gives Δ³f₁ = δ³f_{5/2}.
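The expansion Δ³f₁ = f₄ − 3f₃ + 3f₂ − f₁ can be verified numerically with nested forward differences on illustrative sample values f(x) = x³:

```python
f = [x**3 for x in range(6)]   # f_0 .. f_5 for f(x) = x**3

def fdiff(seq):
    """One application of the forward difference operator."""
    return [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]

nested = fdiff(fdiff(fdiff(f)))[1]            # Delta^3 f_1
closed = f[4] - 3*f[3] + 3*f[2] - f[1]        # expanded form
print(nested, closed)   # both 6
```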
The given data:

| i | xᵢ | f(xᵢ) |
|---|----|---------|
| 0 | -1 | 16.8575 |
| 1 | 0 | 24.0625 |
| 2 | 1 | 16.5650 |
| 3 | 2 | -13.9375 |
| 4 | 3 | 28.5625 |
| 5 | 4 | 144.0625 |
Step 1: Difference Table
The difference table is constructed by finding the first, second, third, and higher-order differences of the function values f(xᵢ). For the forward difference table, each difference is calculated as:
Δfᵢ = fᵢ₊₁ − fᵢ, Δ²fᵢ = Δfᵢ₊₁ − Δfᵢ, and so on.
We'll proceed to calculate each difference step-by-step.
First-order Differences (Δf):
Δf₀ = 24.0625 − 16.8575 = 7.2050
Δf₁ = 16.5650 − 24.0625 = −7.4975
Δf₂ = −13.9375 − 16.5650 = −30.5025
Δf₃ = 28.5625 − (−13.9375) = 42.5000
Δf₄ = 144.0625 − 28.5625 = 115.5000
Second-order Differences (Δ²f):
Δ²f₀ = −7.4975 − 7.2050 = −14.7025
Δ²f₁ = −30.5025 − (−7.4975) = −23.0050
Δ²f₂ = 42.5000 − (−30.5025) = 73.0025
Δ²f₃ = 115.5000 − 42.5000 = 73.0000
Third-order Differences (Δ³f):
Δ³f₀ = −23.0050 − (−14.7025) = −8.3025
Δ³f₁ = 73.0025 − (−23.0050) = 96.0075
Δ³f₂ = 73.0000 − 73.0025 = −0.0025
Fourth-order Differences (Δ⁴f):
Δ⁴f₀ = 96.0075 − (−8.3025) = 104.3100
Δ⁴f₁ = −0.0025 − 96.0075 = −96.0100
Fifth-order Difference (Δ⁵f):
Δ⁵f₀ = −96.0100 − 104.3100 = −200.3200
Difference Table

| i | xᵢ | f(xᵢ) | Δf | Δ²f | Δ³f | Δ⁴f | Δ⁵f |
|---|----|---------|---------|----------|---------|----------|-----------|
| 0 | -1 | 16.8575 | 7.2050 | -14.7025 | -8.3025 | 104.3100 | -200.3200 |
| 1 | 0 | 24.0625 | -7.4975 | -23.0050 | 96.0075 | -96.0100 | |
| 2 | 1 | 16.5650 | -30.5025 | 73.0025 | -0.0025 | | |
| 3 | 2 | -13.9375 | 42.5000 | 73.0000 | | | |
| 4 | 3 | 28.5625 | 115.5000 | | | | |
| 5 | 4 | 144.0625 | | | | | |
Step 2: Forward Differences
The forward differences for the given data are read from the top diagonal of the table: Δf₀ = 7.2050, Δ²f₀ = −14.7025, Δ³f₀ = −8.3025, Δ⁴f₀ = 104.3100, Δ⁵f₀ = −200.3200.
Step 3: Backward Differences
For backward differences, we use the same table read from the bottom diagonal, since ∇ᵏf₅ = Δᵏf₅₋ₖ. Here's how the backward differences at the last point are computed:
First-order Backward Difference (∇f₅): 144.0625 − 28.5625 = 115.5000
Second-order Backward Difference (∇²f₅): 115.5000 − 42.5000 = 73.0000
Third-order Backward Difference (∇³f₅): 73.0000 − 73.0025 = −0.0025
Fourth-order Backward Difference (∇⁴f₅): −0.0025 − 96.0075 = −96.0100
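The whole table can be generated programmatically from the given data; a minimal sketch:

```python
# Build the forward difference table for the given data.
x = [-1, 0, 1, 2, 3, 4]
f = [16.8575, 24.0625, 16.5650, -13.9375, 28.5625, 144.0625]

table = [f]
while len(table[-1]) > 1:
    prev = table[-1]
    # Round to 4 decimals to suppress binary floating-point noise.
    table.append([round(prev[i + 1] - prev[i], 4) for i in range(len(prev) - 1)])

for order, col in enumerate(table):
    print(f"order {order}:", col)
```

Row `order k` holds the column Δᵏf of the table; the backward differences at the last point are simply the final entry of each row.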
Step 1: Set up the forward difference formulas
The forward difference method approximates derivatives by using differences between successive function values at regular intervals.
Formula for the first derivative (f′):
The three-point forward difference formula for the first derivative at x₀ is:
f′(x₀) ≈ (−3f₀ + 4f₁ − f₂) / (2h)
where h is the step size and f₀, f₁, f₂ are the function values at x₀, x₀ + h, and x₀ + 2h.
Formula for the second derivative (f″):
The forward difference formula for the second derivative at x₀ is:
f″(x₀) ≈ (f₀ − 2f₁ + f₂) / h²
Step 2: Data and step size
Let's assume we have a table of values for f(x) near x = 76:

| x | f(x) |
|----|------|
| 76 | f₀ |
| 78 | f₁ |
| 80 | f₂ |

Assume the values of f₀, f₁, and f₂ are provided; the step size is h = 2 (since 78 − 76 = 2 and 80 − 78 = 2).
Step 3: Calculate the first and second derivatives
First Derivative:
Using the first derivative formula:
f′(76) ≈ (−3f₀ + 4f₁ − f₂) / (2 × 2)
Substitute the values of f₀, f₁, f₂ and h = 2 to get the approximate first derivative.
Second Derivative:
Using the second derivative formula:
f″(76) ≈ (f₀ − 2f₁ + f₂) / 2²
Substitute the values of f₀, f₁, f₂ and h = 2 to get the approximate second derivative.
Step 4: Truncation Error (TE)
The truncation error of the forward difference method comes from the higher-order terms ignored in the Taylor series expansion.
Truncation error for the first derivative:
For the three-point first derivative formula, the truncation error is proportional to h²:
TE = (h²/3) f‴(ξ)
where ξ is some point in the interval [x₀, x₀ + 2h] and f‴ is the third derivative of the function at ξ.
Truncation error for the second derivative:
For the second derivative formula evaluated at x₀, the leading truncation error is proportional to h: TE ≈ −h f‴(ξ). When the same difference quotient is centred at x₁ instead, the leading error term becomes −(h²/12) f⁗(ξ), where f⁗ is the fourth derivative of the function at ξ.
Step 5: Actual Error
To compute the actual error, compare the forward difference approximation with the true values of the first and second derivatives (which may be given or calculated analytically from f(x)).
If the true values of the derivatives are provided or known, this step can be carried out directly.
Example:
Let's assume the following hypothetical values for the function at x = 76, 78, and 80:

| x | f(x) |
|----|------|
| 76 | 12.0 |
| 78 | 15.5 |
| 80 | 19.0 |

First derivative calculation:
f′(76) ≈ (−3(12.0) + 4(15.5) − 19.0) / (2 × 2) = (−36 + 62 − 19)/4 = 7/4 = 1.75
Second derivative calculation:
f″(76) ≈ (12.0 − 2(15.5) + 19.0) / 2² = 0/4 = 0
(the sample data happens to be linear, so the estimated second derivative is zero)
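The two estimates can be reproduced with a short script (the f values are the hypothetical ones from the table above):

```python
# Forward-difference derivative estimates at x = 76 with step h = 2.
h = 2.0
f0, f1, f2 = 12.0, 15.5, 19.0   # hypothetical f(76), f(78), f(80)

d1 = (-3*f0 + 4*f1 - f2) / (2*h)   # three-point forward first derivative
d2 = (f0 - 2*f1 + f2) / h**2       # forward second difference

print(d1)   # 1.75
print(d2)   # 0.0 (the sample data is linear)
```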