February 2011
Master of Computer Application (MCA)
Second Semester Assignment – Fully Solved

Name: Vivek Kumar Rai
Roll No:
Website: http://vivekraicdac.blogspot.in/
Email: vivek.k.ray@gmail.com
Phone: 8880583241
February 2011
Master of Computer Application (MCA) – Semester 2
MC0066 – OOPS using C++
Assignment Set – 1
1. Write a program in C++ for matrix multiplication. The program should accept the dimensions of both the matrices to be multiplied, check for compatibility with appropriate messages, and give the output.
Answer:
#include <iostream>
using namespace std;

int main()
{
    int r1, c1, r2, c2;

    cout << "Enter rows and columns of the first matrix: ";
    cin >> r1 >> c1;
    cout << "Enter rows and columns of the second matrix: ";
    cin >> r2 >> c2;

    // Compatibility check: columns of the first must equal rows of the second.
    if (c1 != r2)
    {
        cout << "Matrices are not compatible for multiplication." << endl;
        return 0;
    }

    int m1[10][10], m2[10][10], M[10][10];

    cout << "Enter the elements of the first matrix:" << endl;
    for (int i = 0; i < r1; i++)
        for (int j = 0; j < c1; j++)
            cin >> m1[i][j];

    cout << "Enter the elements of the second matrix:" << endl;
    for (int i = 0; i < r2; i++)
        for (int j = 0; j < c2; j++)
            cin >> m2[i][j];

    // Multiply: M[i][j] is the sum of products of row i of m1 and column j of m2.
    cout << "Resultant matrix:" << endl;
    for (int i = 0; i < r1; i++)
    {
        for (int j = 0; j < c2; j++)
        {
            M[i][j] = 0;
            for (int k = 0; k < c1; k++)
                M[i][j] += m1[i][k] * m2[k][j];
            cout << M[i][j] << " ";
        }
        cout << endl;
    }
    return 0;
}
2. Write a program to check whether a string is a palindrome or not. Please note that a palindrome is one which remains the same if you reverse the characters in the string. For example, "MADAM".
Answer:
#include <iostream>
#include <cstring>
using namespace std;
int main()
{
    char str[100];
    cout << "Enter word: ";
    cin >> str;
    int x = strlen(str) - 1;
    for (int i = 0; i <= x; i++)
    {
        if (str[i] == str[x - i])
        {
            continue;
        }
        else
        {
            cout << "Not a palindrome" << endl;
            return 0;
        }
    }
    cout << "Indeed a palindrome" << endl;
    return 0;
}
3. What is a structure in C++? Define a structure named Product with elements product code, description, unit price and qty in hand. Write a C++ program that implements the structure and enables storing data for at least 100 products.
Answer:
A structure
type is a user-defined composite type. It is composed of fields or
members that can have different types. In C++, a structure is the
same as a class except that its members are public
by default.
Using
a Structure
In C, you must
explicitly use the struct
keyword to declare a structure. In C++, this is unnecessary once the
type has been defined.
You have the option
of declaring variables when the structure type is defined by placing
one or more comma-separated variable names between the closing brace
and the semicolon.
Structure variables
can be initialized. The initialization for each variable must be
enclosed in braces.
#include <iostream>
using namespace std;

struct Product
{
    char  prod_code[8];     // code for the product
    char  description[40];  // short description of the product
    float unit_price;       // price per unit of the product
    int   qty_in_hand;      // quantity on hand in inventory
};

int main()
{
    Product prods[100];     // array to hold up to 100 products
    int n = 0;              // number of products in the array

    // Read product records until input ends or the array is full.
    while (n < 100 &&
           cin >> prods[n].prod_code >> prods[n].description
               >> prods[n].unit_price >> prods[n].qty_in_hand)
    {
        n++;
    }

    cout << n << " products stored." << endl;
    return 0;
}
4. What is the purpose of exception handling? How do you infer from the phrase, "Throwing an exception"?
Answer:
An
exception is a situation in which a program has an unexpected
circumstance that the section of code containing the problem is not
explicitly designed to handle. In C++, exception handling is useful
because it makes it easy to separate the error handling code from the
code written to handle the chores of the program. Doing so makes
reading and writing the code easier.
The designers of C++ at Bell Labs extended the language with exception
handling structures. The commands used relate closely to the terms used
in exception handling (described above). The block of code you want
to try starts with specifying the try
command and surrounding the block with curly braces. Within this
block, you can throw any occurring errors with the throw
command. You must specify an error and this should be a class but we
will get to this later. Immediately after the try-block
is closed, the catch-block
starts. Here the error handling code is placed. The following piece
of pseudo code will show the idea:
try
{
    ...
    throw Exception();
    ...
}
catch( Exception e )
{
    ...
}
In this example, Exception
is a defined class with a constructor with no parameters (as
identified by the throw-call). It would be useful to have some
info on what kind of error occurred. This could be done in two ways.
We could define different exception-classes and throw them according
to which error occurred. We also could give the class a parameter
containing an error message and allow the class to display the
message.
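To make the pseudo code above concrete, here is a minimal, self-contained sketch. The Exception class, the divide function and its message are illustrative assumptions, not part of the assignment text:

#include <iostream>
#include <string>
using namespace std;

// Illustrative exception class carrying an error message.
class Exception
{
    string msg;
public:
    Exception(const string& m) : msg(m) {}
    const string& message() const { return msg; }
};

// Hypothetical helper that throws when it cannot produce a result.
double divide(double a, double b)
{
    if (b == 0)
        throw Exception("division by zero");   // "throwing an exception"
    return a / b;
}

int main()
{
    try
    {
        cout << divide(10, 0) << endl;         // the block of code we want to try
    }
    catch (const Exception& e)                 // the error handling code is placed here
    {
        cout << "Caught: " << e.message() << endl;
    }
    return 0;
}

Here the error handling stays in the catch block, separate from the normal flow of divide, which is exactly the separation described above.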
February 2011
Master of Computer Application (MCA) – Semester 2
MC0067 – Database Management System (DBMS and Oracle 9i)
Assignment Set – 1
1. Write about:
- Linear Search
- Collision Chain

Linear Search
Answer:
Linear
search, also known as sequential search, means starting at the
beginning of the data and checking each item in turn until either the
desired item is found or the end of the data is reached.
Algorithm:
For each item in the database
    if the item matches the wanted info
        exit with this item
Continue loop
Wanted item is not in the database
Linear search – code
There are two common forms of code for linear search. The difference is what is returned as the result of the search:
- sometimes the data itself is returned (essentially a database query);
- sometimes the location of the data is returned (preparatory to a database update).

Return the data itself:
function findstudent (adfa : in adfaresults;   -- database
                      findname : in snames)    -- value sought
    return students
is
    empty : students;   -- null value, in case not found
begin
    for division in adfa'RANGE loop
        for student in divresults'RANGE loop
            if (adfa(division)(student).detail.sname = findname) then
                return adfa(division)(student);
            end if;
        end loop;
    end loop;
    return empty;
end findstudent;
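For comparison, a minimal C++ sketch of the same linear search. The Student record and the sample data are invented for illustration; they are not part of the course material:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

struct Student { string name; int marks; };

// Return a pointer to the first matching record, or nullptr if not found.
const Student* find_student(const vector<Student>& db, const string& wanted)
{
    for (const Student& s : db)       // for each item in the database
        if (s.name == wanted)         // the item matches the wanted info
            return &s;                // exit with this item
    return nullptr;                   // wanted item is not in the database
}

int main()
{
    vector<Student> db = { {"Asha", 71}, {"Ravi", 64}, {"Mina", 82} };
    const Student* s = find_student(db, "Ravi");
    if (s) cout << s->name << " scored " << s->marks << endl;
    else   cout << "Not found" << endl;
    return 0;
}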
Collision
Chain :
Answer:
A
Hash table is one of the simplest index structures which a database
can implement. The major components of a hash index are the "hash
function" and the "buckets". Effectively the DBMS
constructs an index for every table you create that has a PRIMARY KEY
attribute, like:
CREATE
TABLE test ( id INTEGER PRIMARY KEY, name varchar(100));
In
table test, we have decided to store 4 rows...
insert
into test values (1,'Gordon');
insert
into test values (2,'Jim');
insert
into test values (4,'Andrew');
insert
into test values (3,'John');
The algorithm
splits the places which the rows are to be stored into areas. These
areas are called buckets. If a row's primary key matches the
requirements to be stored in that bucket, then that is where it will
be stored. The algorithm to decide which bucket to use is called the
hash function. For our example we will have a nice simple hash
function, where the bucket number equals the primary key. When the
index is created we have to also decide how many buckets there are.
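A rough C++ sketch of the idea described above. The Row type, the modulo hash function and the bucket count are assumptions made for illustration; the text's own example simply uses the primary key as the bucket number:

#include <iostream>
#include <string>
#include <vector>
using namespace std;

struct Row { int id; string name; };

int main()
{
    const int NUM_BUCKETS = 4;                    // decided when the index is created
    vector< vector<Row> > buckets(NUM_BUCKETS);   // one chain of rows per bucket

    // The same four rows as in the INSERT statements above.
    vector<Row> rows = { {1, "Gordon"}, {2, "Jim"}, {4, "Andrew"}, {3, "John"} };
    for (const Row& r : rows)
        buckets[r.id % NUM_BUCKETS].push_back(r); // hash function: id mod NUM_BUCKETS

    int wanted = 3;                               // look up a key: hash it, scan one bucket
    for (const Row& r : buckets[wanted % NUM_BUCKETS])
        if (r.id == wanted)
            cout << "Found: " << r.name << endl;
    return 0;
}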
2. Write about:
- Integrity Rules
- Relational Operators with examples for each
- Linear Search
- Collision Chain

Integrity Rules
Answer:
These
are the rules which a relational database follows in order to stay
accurate and accessible. These rules govern which operations can be
performed on the data and on the structure of the database.
There
are three integrity rules defined for a relational database, which
are:-
1.
Distinct Rows in a Table
- this rule says that all the rows of a table should be distinct to
avoid ambiguity while accessing the rows of that table. Most of
the modern database management systems can be configured to avoid
duplicate rows.
2.
Entity Integrity (A Primary Key or part of it cannot be null)
- this rule says that 'null' is a special value in a relational
database and it doesn't mean blank or zero. It means the
unavailability of data and hence a 'null' primary key would not be a
complete identifier. This integrity rule is also termed as entity
integrity.
3. Referential Integrity - this rule says that if a foreign key is defined on a table, then a value matching that foreign key value must exist as the primary key of a row in some other table.
Relational
Operators with examples for each
Answer:
Relational
database supports basic database operations in order to provide
useful means for retrieving or manipulating data in tables. Because
the relational model has its mathematical basis upon the relational
theory (by thinking of tables as sets, or relations), the supported
database operators conform to existing operators in relational
algebra.
In
fact, a relational database software implementation, called DBMS, is
said to have higher degree of relational completeness depending upon
the extent to which relational algebra operators are supported.
In total, eight operators are found in relational theory, namely SELECT, PROJECT, JOIN, INTERSECT, UNION, DIFFERENCE, PRODUCT and DIVIDE. Minimally speaking, a DBMS implementation is said to be relational if it supports at least the key relational operators, namely SELECT, PROJECT, and JOIN. Very few DBMSs are capable of supporting all eight relational operators. Applying relational algebra operators to existing tables (relations) produces outcomes that are themselves relations. This characteristic lets the user recursively apply the operators to the operator outcomes.
Linear Search
Answer:
Linear search is an important process
of Hashing. While inserting a new record, if it is found that the
location at the hash address is already occupied by a previously
inserted record, search for the next free location available at the
disk and store the record at this location. A pointer from the first
record at the original hash address to the new record will also be
stored. During retrieval, the hash address is computed to locate the
record. When it is seen that the record is not available at the hash
address, the pointer from the record of that address is followed to
locate the required record. In this method, the overhead incurred is
the time taken for the linear search to locate the next free location
while inserting a record.
Collision
Chain: Unlike the linear search in collision chain, the hash address
location contains the head of a list of pointers linking together all
records which hash to that address. In this method the overflow area
needs to be used if the number of records mapping on the same address
exceeds the number of locations linked to it.
Integrity
Rules: The following are the integrity rules which must be satisfied
by every relation.
No
component of the Primary key can be null.
The
database must not contain any unmatched foreign key values. This is
called referential integrity rule.
Unlike the case of primary keys, there is no integrity rule saying that no component of a foreign key can be null.
Types
of Relational Operators:
Traditional
Set Operators
Union:
Example: S1 = {1, 2, 3, 4, 5}, S2 = {4, 5, 6, 7, 8}
S1 Union S2 = {1, 2, 3, 4, 5, 6, 7, 8}
Intersection:
Example: S1 = {1, 2, 3, 4, 5}, S2 = {4, 5, 6, 7, 8}
S1 Intersection S2 = {4, 5}
Difference:
Example: S1 = {1, 2, 3, 4, 5}, S2 = {4, 5, 6, 7, 8}
S1 - S2 = {1, 2, 3}
Cartesian Product:
Example: S1 = {1, 2, 3}, S2 = {4, 5, 6}
S1 X S2 = {(1, 4), (1, 5), (1, 6), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (3, 6)}
Special
Operators
Selection:
The selection operator yields a horizontal subset of a given relation, that is, the subset of tuples (rows) of the relation for which a particular condition is satisfied. In simple words, it selects rows with all their columns.
Projection:
The projection operation on a table simply forms another table by
copying specified columns from the original table, eliminating any
duplicate row. The projection operator yields a vertical subset of a
given relation- that is, the subset obtained by selecting specified
attributes, in a specified left to right order and then eliminating
duplicate tuples.
Join:
The most general form of join used is called the theta join, where theta has the same meaning as "compare with". That is, it stands for any of the comparison operators: equal, not equal, greater than, and so forth. A theta join is performed on two relations which have one or more columns in common.
Division:
The division operator divides a dividend relation A of degree m+n by
a divisor relation B of degree n, and produces a resultant relation
of degree m.
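To illustrate the difference between selection (picking rows) and projection (picking columns), here is a small in-memory sketch; the Employee relation and its values are invented for the example and are not part of the course text:

#include <iostream>
#include <set>
#include <string>
#include <vector>
using namespace std;

struct Employee { int id; string dept; int salary; };

int main()
{
    vector<Employee> emp = { {1, "Sales", 300}, {2, "IT", 500}, {3, "Sales", 450} };

    // Selection: a horizontal subset - whole rows that satisfy a condition.
    for (const Employee& e : emp)
        if (e.dept == "Sales")
            cout << e.id << " " << e.dept << " " << e.salary << endl;

    // Projection: a vertical subset - only the dept column, duplicates eliminated.
    set<string> depts;
    for (const Employee& e : emp)
        depts.insert(e.dept);
    for (const string& d : depts)
        cout << d << endl;
    return 0;
}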
Correspondences between the ER model constructs and the relational model constructs:

E-R Model construct              Relational Model construct
Entity type                      "Entity" relation
1:1 or 1:N relationship type     Foreign key (or "Relationship" relation)
M:N relationship type            "Relationship" relation and two foreign keys
n-ary relationship type          "Relationship" relation and n foreign keys
Simple attribute                 Attribute
Composite attribute              Set of simple component attributes
Multivalued attribute            Relation and foreign key
Value set                        Domain
Key attribute                    Primary (or secondary) key
a)
Disk: Disk storage or disc storage is a general category of storage
mechanisms, in which data are digitally recorded by various
electronic, magnetic, optical, or mechanical methods on a surface
layer deposited on one or more planar, round and rotating platters. A
disk drive is a device implementing such a storage mechanism with
fixed or removable media; with removable media the device is usually
distinguished from the media, as in compact disc drive and compact
disc. Notable types are the hard disk drive (which contains a
non-removable disc), the floppy disk drive and its removable floppy
disk, and various optical disc drives and associated media.
b)
Disk Pack: A Disk pack is a layered grouping of hard disk platters
(circular, rigid discs coated with a magnetic data storage surface).
Disk pack is the core component of a hard disk drive. In modern hard
disks, the disk pack is permanently sealed inside the drive. In many
early hard disks, the disk pack was a removable unit, and would be
supplied with a protective canister featuring a lifting handle.
The
protective cover consisted of two parts, a clear plastic shell, with
a handle in the center, that enclosed the top and sides of the disks
and a separate bottom that completed the sealed package. To remove
the disk pack, the drive would be taken off line and allowed to spin
down. Its access door could then be opened and an empty top shell
inserted and twisted to unlock the disk platter from the drive and
secure it to the top shell. The assembly would then be lifted out and
the bottom cover attached. A different disk pack could then be
inserted by removing the bottom and placing the disk pack with its
top shell into the drive. Turning the handle would lock the disk pack
in place and free the top shell for removal.
c)
Track: Cylinders contain tracks, which are circular paths on the
surface of a disk or diskette on which information is magnetically
recorded and from which recorded information is read. Tracks are in
count-key-data (CKD) format, which means that each track contains
fields that indicate the start of the track and the space used,
followed by records containing three fields:
The count field defines the length of the record.
The key field contains optional accounting information.
The data field contains the user data.
d)
Blocks: Records can be grouped into data blocks, which are the units
of recording on disk. Blocking makes processing more efficient
because z/OS can access an entire block at once instead of reading or
writing records individually. Block size (abbreviated as BLKSIZE) is
the physical block size written on the disk for fixed (F) and fixed
block (FB) records. For variable and undefined (V, VB, and U)
records, block size is the maximum physical block size that can be
used for the data set.
e)
Cylinder: A disk drive contains cylinders. A cylinder is a unit of
storage on a count-key-data (CKD) device with a fixed number of
tracks.
f)
Sector: A sector is a subdivision of a track on a magnetic disk or
optical disc. Each sector stores a fixed amount of user data.
Traditional formatting of these storage media provides space for 512
bytes (for magnetic disks) or 2048 bytes (for optical discs) of
user-accessible data per sector. Newer hard drives use 4096 byte
(4 KB or 4K) Advanced Format sectors.
g)
Inter Block Gap: In a typical format, data is written to tape in
blocks with inter-block gaps between them, and each block is written
in a single operation with the tape running continuously during the
write. However, since the rate at which data is written or read to
the tape drive is not deterministic, a tape drive usually has to cope
with a difference between the rate at which data goes on and off the
tape and the rate at which data is supplied or demanded by its host.
h)
read/write head: Disk read/write heads are the small parts of a disk
drive, that move above the disk platter and transform platter's
magnetic field into electrical current (read the disk) or vice versa
– transform electrical current into magnetic field (write the
disk). The heads have gone through a number of changes over the
years.
3. Discuss the correspondences between the ER model constructs and the relational model constructs. Show how each ER model construct can be mapped to the relational model, and discuss any alternative mappings.
Answer:
There are several options for mapping a number of subclasses that together form a specialization (or alternatively, that are generalized into a superclass), such as the {SECRETARY, TECHNICIAN, ENGINEER} subclasses of EMPLOYEE. We can add a further step to our ER-to-relational mapping algorithm, which has seven steps, to handle the mapping of specialization. Step 8, which follows, gives the most common options; other mappings are also possible. We then discuss the conditions under which each option should be used. We use Attrs(R) to denote the attributes of relation R, and PK(R) to denote the primary key of R.

Options for Mapping Specialization or Generalization: Convert each specialization with m subclasses {S1, S2, ..., Sm} and (generalized) superclass C, where the attributes of C are {k, a1, ..., an} and k is the (primary) key, into relation schemas using one of the four following options.

Option 8A: Multiple relations – Superclass and subclasses: Create a relation L for C with attributes Attrs(L) = {k, a1, ..., an} and PK(L) = k. Create a relation Li for each subclass Si, 1 ≤ i ≤ m, with the attributes Attrs(Li) = {k} ∪ {attributes of Si} and PK(Li) = k. This option works for any specialization (total or partial, disjoint or overlapping).

Option 8B: Multiple relations – Subclass relations only: Create a relation Li for each subclass Si, 1 ≤ i ≤ m, with the attributes Attrs(Li) = {attributes of Si} ∪ {k, a1, ..., an} and PK(Li) = k. This option only works for a specialization whose subclasses are total (every entity in the superclass must belong to (at least) one of the subclasses).

Option 8C: Single relation with one type attribute: Create a single relation L with attributes Attrs(L) = {k, a1, ..., an} ∪ {attributes of S1} ∪ ... ∪ {attributes of Sm} ∪ {t} and PK(L) = k. The attribute t is called a type (or discriminating) attribute that indicates the subclass to which each tuple belongs, if any. This option works only for a specialization whose subclasses are disjoint, and has the potential for generating many null values if many specific attributes exist in the subclasses.

Option 8D: Single relation with multiple type attributes: Create a single relation schema L with attributes Attrs(L) = {k, a1, ..., an} ∪ {attributes of S1} ∪ ... ∪ {attributes of Sm} ∪ {t1, t2, ..., tm} and PK(L) = k. Each ti, 1 ≤ i ≤ m, is a Boolean type attribute indicating whether a tuple belongs to subclass Si. This option works for a specialization whose subclasses are overlapping (but will also work for a disjoint specialization).
4. Define the following terms: disk, disk pack, track, block, cylinder, sector, interblock gap, read/write head.
Answer:
Disk: Disk storage or disc storage is a general category of storage mechanisms in which data are digitally recorded by various electronic, magnetic, optical, or mechanical methods on a surface layer deposited on one or more planar, round and rotating platters. A disk drive is a device implementing such a storage mechanism with fixed or removable media; with removable media the device is usually distinguished from the media, as in compact disc drive and compact disc. Notable types are the hard disk drive (which contains a non-removable disc), the floppy disk drive and its removable floppy disk, and various optical disc drives and associated media.
Disk Pack: A disk pack
is a layered grouping of hard
disk platters
(circular, rigid discs coated with a magnetic
data
storage
surface). Disk pack is the core component of a hard
disk
drive. In modern hard disks, the disk pack is permanently sealed
inside the drive. In many early hard disks, the disk pack was a
removable unit, and would be supplied with a protective canister
featuring a lifting handle.
The protective cover
consisted of two parts, a clear plastic shell, with a handle in the
center, that enclosed the top and sides of the disks and a separate
bottom that completed the sealed package. To remove the disk pack,
the drive would be taken off line and allowed to spin down. Its
access door could then be opened and an empty top shell inserted and
twisted to unlock the disk platter from the drive and secure it to
the top shell. The assembly would then be lifted out and the bottom
cover attached. A different disk pack could then be inserted by
removing the bottom and placing the disk pack with its top shell into
the drive. Turning the handle would lock the disk pack in place and
free the top shell for removal.
Track:
A
ring on a disk
where data
can be written. A typical floppy
disk has 80 (double-density)
or 160 (high-density)
tracks. For hard
disks, each platter
is divided into tracks, and a single track location that cuts through
all platters (and both sides of each platter) is called a cylinder.
Hard disks have many thousands of cylinders.
Each
track is further divided into a number of sectors.
The operating
system and disk
drive remember where information is stored
by noting its track and sector numbers.
Block: A block is a contiguous
set of bits or bytes that forms an identifiable unit of data. The
term is used in database management, word processing, and network
communication.
1) In some
databases,
a block is the smallest amount of data that a program can request. It
is a multiple of an operating system block, which is the smallest
amount of data that can be retrieved from storage or memory. Multiple
blocks in a database comprise an extent.
2) In word processing, a block is a contiguous set of
characters. Often it consists of a phrase, a sentence, a paragraph,
or a set of paragraphs that is selected by the user for
copying/pasting, cutting, or moving. But a block can consist of any
contiguous set of characters, whether or not it forms a logical unit
of text.
3) In network communication, a block is a group of data bits or
bytes that is transferred as a standard unit. The size (or length)
of such a block depends on the communications protocol.
Cylinder: A single track
location on all the platters
making up a hard
disk. For example, if a hard disk has four platters, each with
600 tracks, then there will be 600 cylinders, and each cylinder will
consist of 8 tracks (assuming that each platter has tracks on both
sides).
Sector: The smallest unit that can be accessed on a disk.
When a disk undergoes a low-level format,
it is divided into tracks
and sectors. The tracks are concentric circles around the disk and
the sectors are segments within each circle. For example, a formatted
disk might have 40 tracks, with each track divided into 10 sectors.
The operating system and disk
drive keep tabs on where information is stored
on the disk by noting its track and sector number.
Modern hard
disk drives use a technique called zoned-bit
recording in which tracks on the outside of the disk contain
more sectors than those on the inside .
Interblock Gap: Short for interrecord gap (IRG), the space between two consecutive physical blocks on a data recording medium, such as a hard drive or a magnetic tape. IRGs are used as markers for the end of data and also as safety margins for data overwrites. An interrecord gap is also referred to as an interblock gap.
Read/Write Head: Disk
read/write heads
are the small parts of a disk drive, that move above the disk platter
and transform platter's magnetic field into electrical current (read
the disk) or vice versa – transform electrical current into
magnetic field (write the disk). The heads have gone through a number
of changes over the years.
February 2011
Master of Computer Application (MCA) – Semester 2
MC0068 – Data Structures using C
Assignment Set – 1
1. Describe the theory and applications of double-ended queues (deques) and circular queues.
Answer:
Another type of queue, called a double-ended queue or deque, is discussed in this section. A deque is a special type of data structure in which insertions and deletions can be done either at the front end or at the rear end of the queue. The operations that can be performed on a deque are:
· Insert an item
from front end
· Insert an item
from rear end
· Delete an item
from front end
· Delete an item
from rear end
· Display the
contents of queue
Example
: Double-ended queue
#include <stdio.h>
#include <stdlib.h>
#define QUEUE_SIZE 5

/* Insert an item at the front end */
void insert_front(int item, int q[], int *f, int *r)
{
    if (*f == 0 && *r == -1)          /* queue is empty */
        q[++(*r)] = item;
    else if (*f != 0)                 /* space available before the front */
        q[--(*f)] = item;
    else
        printf("Insertion at the front is not possible\n");
}

/* Insert an item at the rear end */
void insert_rear(int item, int q[], int *r)
{
    if (*r == QUEUE_SIZE - 1)
    {
        printf("Queue overflow\n");
        return;
    }
    q[++(*r)] = item;
}

/* Delete an item from the front end */
void delete_front(int q[], int *f, int *r)
{
    if (*f > *r)
    {
        printf("Queue underflow\n");
        return;
    }
    printf("Deleted from front: %d\n", q[(*f)++]);
}

/* Delete an item from the rear end */
void delete_rear(int q[], int *f, int *r)
{
    if (*f > *r)
    {
        printf("Queue underflow\n");
        return;
    }
    printf("Deleted from rear: %d\n", q[(*r)--]);
}

/* Display the contents of the queue */
void display(int q[], int f, int r)
{
    int i;
    if (f > r)
    {
        printf("Queue is empty\n");
        return;
    }
    printf("Contents of queue:\n");
    for (i = f; i <= r; i++)
        printf("%d ", q[i]);
    printf("\n");
}

int main(void)
{
    int choice, item, f, r, q[10];
    f = 0;
    r = -1;
    for (;;)
    {
        printf("1: Insert_front  2: Insert_rear\n");
        printf("3: Delete_front  4: Delete_rear\n");
        printf("5: Display       6: Exit\n");
        printf("Enter the choice\n");
        scanf("%d", &choice);
        switch (choice)
        {
        case 1:
            printf("Enter the item to be inserted\n");
            scanf("%d", &item);
            insert_front(item, q, &f, &r);
            break;
        case 2:
            printf("Enter the item to be inserted\n");
            scanf("%d", &item);
            insert_rear(item, q, &r);
            break;
        case 3:
            delete_front(q, &f, &r);
            break;
        case 4:
            delete_rear(q, &f, &r);
            break;
        case 5:
            display(q, f, r);
            break;
        default:
            exit(0);
        }
    }
    return 0;
}
Circular Queue: A circular queue is a particular implementation of a queue. It is very efficient. It is also quite useful in low-level code, because insertion and deletion are totally independent, which means that you don't have to worry about an interrupt handler trying to do an insertion at the same time as your main code is doing a deletion.
There is a problem with this: both an empty queue and a full queue would be indicated by having the head and tail point to the same element. There are two ways around this: either maintain a variable with the number of items in the queue, or create the array with one more element than you will actually need, so that the queue is never full.
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>

#define MAX 80

char buf[MAX + 1];
int spos = 0;
int rpos = 0;

void qstore(char q);
char qretrieve(void);

int main(void)
{
    register char ch;
    int t;

    buf[80] = '\0';

    /* Input characters until a carriage return is typed. */
    for (ch = ' ', t = 0; t < 32000 && ch != '\r'; ++t)
    {
        if (_kbhit())
        {
            ch = _getch();
            qstore(ch);
        }
        printf("%d", t);
        if (ch == '\r')
        {
            /* Display and empty the key buffer. */
            printf("\n");
            while ((ch = qretrieve()) != '\0')
                printf("%c", ch);
            printf("\n");
        }
    }
    return 0;
}

/* Store characters in the queue. */
void qstore(char q)
{
    if (spos + 1 == rpos || (spos + 1 == MAX && !rpos))
    {
        printf("List Full\n");
        return;
    }
    buf[spos] = q;
    spos++;
    if (spos == MAX)
        spos = 0;   /* loop back */
}

/* Retrieve a character. */
char qretrieve(void)
{
    if (rpos == MAX)
        rpos = 0;   /* loop back */
    if (rpos == spos)
        return '\0';
    rpos++;
    return buf[rpos - 1];
}
2. Illustrate a C program that represents the stack implementation of the PUSH and POP operations.
Answer:
Program representing the stack implementation of the PUSH and POP operations:

#include <iostream>
#include <string>
using namespace std;

class STACK
{
private:
    int top;            // index of the next free slot
    int stack[20];
public:
    STACK() { top = 0; }
    void push(int x);
    int pop();
    int current();      // value currently on top of the stack
};

void STACK::push(int x)
{
    if (top == 20)
    {
        cout << "Stack overflow" << endl;
        return;
    }
    stack[top++] = x;
}

int STACK::pop()
{
    if (top == 0)
    {
        cout << "Stack underflow" << endl;
        return -1;
    }
    top--;
    int x = stack[top];
    return x;
}

int STACK::current()
{
    if (top == 0)
        return -1;      // nothing on the stack
    return stack[top - 1];
}

// main
int main()
{
    int pushnum;
    string choice;
    STACK a;
    do
    {
        cout << "Enter 'push', 'pop', or 'q': ";
        cin >> choice;
        if (choice == "pop")
        {
            cout << "Popped number is: " << a.pop() << endl;
            cout << "Current number is: " << a.current() << endl;
        }
        if (choice == "push")
        {
            cout << "Enter number to push: ";
            cin >> pushnum;
            a.push(pushnum);
            cout << "Current number is: " << a.current() << endl;
        }
    } while (choice != "q");
    return 0;
}
3. Show the result of inserting 3, 1, 4, 5, 2, 9, 6, 8 into:
A) a bottom-up splay tree
B) a top-down splay tree
Answer:
The advantage of implementing splaying using rotation as a primitive is that we can encapsulate all the restructuring, including any required updating of auxiliary fields, in the rotation procedures. The disadvantage is that many unnecessary pointer assignments are done. We can achieve greater efficiency by eliminating the superfluous assignments, by expanding the rotation procedures in-line, simplifying, and keeping extra information in the control state. A bottom-up splaying method is appropriate if we have direct access to the node at which the splaying is to occur. However, splaying as used here only occurs immediately after an access, and it is more efficient to splay on the way down the tree during the access. We present a top-down version of splaying that works in this way.
Suppose we wish to access an item i in a tree. During the access and concurrent splaying operation, the tree is broken into three parts: a left tree, a middle tree, and a right tree. The left and right trees contain all items in the original tree so far known to be less than i and greater than i, respectively. The middle tree consists of the subtree of the original tree rooted at the current node on the access path. Initially the current node is the tree root and the left and right trees are empty. To do the splaying, we search down from the root looking for i, two nodes at a time, breaking links along the access path and adding each subtree thus detached to the bottom right of the left tree or the bottom left of the right tree, with the proviso that if we take two steps down in the same direction (left-left or right-right) we do a rotation before breaking a link. More formally, the splaying operation consists of repeatedly applying the appropriate case until none applies, then reassembling the three remaining trees.
A similar top-down restructuring method, but without the rotations and consequently with poor amortized efficiency, was proposed by Stephenson. The following program implements this method. The program uses three local variables, t, l, and r, denoting the current vertex, the last node in symmetric order in the left tree, and the first node in symmetric order in the right tree, respectively. The updating of these variables occurs in the primitive procedures. These are rotate left and rotate right, which rotate the edge joining t and its right or left child, respectively; link left and link right, which break the link between t and its left or right child and attach the resulting tree to the left or right tree, respectively; and assemble, which completes the splaying by performing the assembly. The program contains an initializing guard that declares l and r and initializes them both to null. After the first left link, the right child of null is the root of the left tree; after the first right link, the left child of null is the root of the right tree.
procedure topdown splay(i, t);
    if l, r: l = r = null →
        left(null) := right(null) := null;
        do i < item(t) →
            if i = item(left(t)) → link right
             | i < item(left(t)) → rotate right; link right
             | i > item(left(t)) → link right; link left
            fi
         | i > item(t) →
            if i = item(right(t)) → link left
             | i > item(right(t)) → rotate left; link left
             | i < item(right(t)) → link left; link right
            fi
        od {i = item(t)};
        assemble
    fi
end topdown splay;
We can simplify top-down splaying, as we did bottom-up splaying, by eliminating the second linking operation in the zig-zag case. The resulting program is as follows:
procedure simple topdown splay(i, t);
    if l, r: l = r = null →
        left(null) := right(null) := null;
        do i < item(t) →
            if i < item(left(t)) → rotate right
             | i ≥ item(left(t)) → skip
            fi;
            link right
         | i > item(t) →
            if i > item(right(t)) → rotate left
             | i ≤ item(right(t)) → skip
            fi;
            link left
        od {i = item(t)};
        assemble
    fi
end simple topdown splay.
4. Discuss the techniques for allowing a hash file to expand and shrink dynamically. What are the advantages and disadvantages of each?
Answer:
Dynamic Hashing
As
the database grows over time, we have three options:
·
Choose hash function based on current file size. Get performance
degradation as file grows.
·
Choose hash function based on anticipated file size. Space is wasted
initially.
·
Periodically re-organize hash structure as file grows. Requires
selecting new hash function, recomputing all addresses and generating
new bucket assignments. Costly, and shuts down database.
Some
hashing techniques allow the hash function to be modified dynamically
to accommodate the growth or shrinking of the database. These are
called dynamic hash functions. Extendable hashing is one form of
dynamic hashing. Extendable hashing splits and coalesces buckets as
database size changes. This imposes some performance overhead, but
space efficiency is maintained. As reorganization is on one bucket at
a time, overhead is acceptably low.
1.
We choose a hash function that is uniform and random that generates
values over a relatively large range.
·
Range is b-bit binary integers (typically b=32).
·
2^32 is over 4 billion, so we don’t generate that many buckets!
·
Instead we create buckets on demand, and do not use all b bits of the
hash initially.
·
At any point we use i bits, where 0 <= i <= b.
·
The i bits are used as an offset into a table of bucket addresses.
·
Value of i grows and shrinks with the database.
·
Figure 6.16 shows an extendable hash structure.
·
Note that the i appearing over the bucket address table tells how
many bits are required to determine the correct bucket.
·
It may be the case that several entries point to the same bucket.
·
All such entries will have a common hash prefix, but the length of
this prefix may be less than i.
·
So we give each bucket an integer giving the length of the common
hash prefix.
·
This is shown in Figure B as i_j (the local prefix length of bucket j).
·
The number of bucket address table entries pointing to bucket j is then 2^(i - i_j).
1.
To find the bucket containing search key value K:
·
Compute h(K).
·
Take the first i high-order bits of h(K).
·
Look at the corresponding table entry for this i-bit string.
·
Follow the bucket pointer in the table entry (a small sketch of this step appears at the end of this answer).
2.
We now look at insertions in an extendable hashing scheme.
·
Follow the same procedure for lookup, ending up in some bucket j.
·
If there is room in the bucket, insert information and insert record
in the file.
·
If the bucket is full, we must split the bucket, and redistribute the
records.
·
If bucket is split we may need to increase the number of bits we use
in the hash.
3.
Two cases exist:
i.
If i = i_j, then only one entry in the bucket address table points to
bucket j.
·
Then we need to increase the size of the bucket address table so that
we can include pointers to the two buckets that result from splitting
bucket j.
·
We increment i by one, thus considering more of the hash, and
doubling the size of the bucket address table.
·
Each entry is replaced by two entries, each containing original
value.
·
Now two entries in bucket address table point to bucket j.
·
We allocate a new bucket z, and set the second pointer to point to z.
·
Set i_j and i_z to i.
·
Rehash all records in bucket j which are put in either j or z.
·
Now insert new record.
·
It is remotely possible, but unlikely, that the new hash will still
put all of the records in one bucket.
·
If so, split again and increment i again.
ii.
If i > i_j, then more than one entry in the bucket address table
points to bucket j.
·
Then we can split bucket j without increasing the size of the bucket address table (why?).
·
Note that all entries that point to bucket j correspond to hash prefixes that have the same value on the leftmost i_j bits.
·
We allocate a new bucket z, and set i_j and i_z to the original i_j value plus 1.
·
Now adjust entries in the bucket address table that previously
pointed to bucket j.
·
Leave the first half pointing to bucket j, and make the rest point to
bucket z.
·
Rehash each record in bucket j as before.
·
Reattempt new insert.
4.
Note that in both cases we only need to rehash records in bucket j.
5.
Deletion of records is similar. Buckets may have to be coalesced, and
bucket address table may have to be halved.
Advantages:
·
Extendable hashing provides performance that does not degrade as the
file grows.
·
Minimal space overhead – no buckets need be reserved for future
use. Bucket address table only contains one pointer for each hash
value of current prefix length.
Disadvantages:
·
Extra level of indirection in the bucket address table
·
Added complexity
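As promised above, a minimal sketch of the bucket-lookup step: take the first i high-order bits of the b-bit hash value and use them as an index into the bucket address table. The constant, the function name and the sample value are assumptions for illustration only:

#include <cstdint>
#include <iostream>

const int B = 32;                          // bits produced by the hash function (b = 32)

// Index into the bucket address table when the current prefix length is i bits.
uint32_t bucket_index(uint32_t hash_value, int i)
{
    if (i == 0) return 0;                  // only one bucket exists yet
    return hash_value >> (B - i);          // first i high-order bits of the hash
}

int main()
{
    uint32_t h = 0xC3A1F00Du;              // pretend hash of some search key
    std::cout << bucket_index(h, 2) << std::endl;   // i = 2: one of 4 table entries
    std::cout << bucket_index(h, 3) << std::endl;   // after doubling, i = 3: one of 8
    return 0;
}

Doubling the bucket address table corresponds to increasing i by one, so every old entry is simply replaced by two entries that initially point to the same bucket.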
February 2011
Master of Computer Application (MCA) – Semester 2
MC0069 – System Analysis & Design
Assignment Set – 1
1. Write and explain the important relationships that are used in object-oriented modeling.
Ans:
A relationship is a general term covering the specific types
of logical connections found on class and objects diagrams. A
relationship references one or more related elements. There is no
general notation for a relationship. In most cases the notation is
some kind of a line connecting related elements. Specific subclasses
of the relationship define their own notation.
Instance Level
Relationships -
Association:
Association is a relationship between classifiers which is
used to show that instances of classifiers could be either linked to
each other or combined logically or physically into some aggregation.
Association defines the relationship between
two or more classes in the System. These generally relates to the one
object having instance or reference of another object inside it.
Associations in UML can be implemented in the following ways:
1) Multiplicity
2) Aggregation
3) Composition
Multiplicity:
Multiplicity indicates the number of instances of one class that are linked to one instance of another class. The various multiplicity values are listed below:
Notation    Description
1           Only one instance
0..1        Zero or one instance
*           Many instances
0..*        Zero or many instances
1..*        One or many instances
Aggregation:
Aggregation represents a whole-part relationship between a main (whole) class and its sub (part) classes, in which the sub-class (part) can exist without the main class (whole).
Aggregation
(shared aggregation) is a "weak" form of aggregation
when part instance is independent of the composite:
The same
(shared) part could be included in several composites, and if
composite is deleted, shared parts may still exist. Shared
aggregation is shown as binary association decorated with a hollow
diamond as a terminal adornment at the aggregate end of the
association line. The diamond should be noticeably smaller than the
diamond notation for N-ary associations.
Search Service
has a Query
Builder
using shared aggregation
Composition in UML: Composition represents a whole-part relationship in which the sub-class (part) cannot exist without the main class (whole). The sub-class can take part in only one composite relation with a main class at a time.
Composition
:
composite
aggregation
is a "strong" form of aggregation. Composition
requirements/features listed in UML specification are:
- it is a whole/part relationship,
- it is binary association,
- part could be included in at most one composite (whole) at a time, and
- if a composite (whole) is deleted, all of its composite parts are "normally" deleted with it.
Note, that UML does
not define how, when and specific order in which parts of the
composite are created. Also, in some cases a part can be removed from
a composite before the composite is deleted, and so is not
necessarily deleted as part of the composite.
Composite
aggregation is depicted as a binary association decorated with a
filled
black diamond
at the aggregate (whole) end.
Folder could contain
many files, while each File has exactly one Folder parent.
If Folder is deleted, all contained Files are deleted as well.
When composition is
used in domain
models,
both whole/part relationship as well as event of composite "deletion"
should be interpreted figuratively, not necessarily as physical
containment and/or termination. UML specification needs to be updated
to explicitly allow this interpretation.
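A rough C++ rendering of these two relationships (the class bodies are illustrative assumptions, not part of the text): composition is modelled by owning the parts by value, so the Folder's Files disappear with it, while shared aggregation keeps only a non-owning pointer, as in the Search Service / Query Builder example above.

#include <string>
#include <vector>
using namespace std;

class File
{
    string name;
public:
    explicit File(const string& n) : name(n) {}
};

class Folder
{
    vector<File> files;            // composition: the parts live and die with the whole
public:
    void add(const File& f) { files.push_back(f); }
};                                 // destroying a Folder destroys its Files as well

class QueryBuilder { };

class SearchService
{
    QueryBuilder* builder;         // shared aggregation: not owned, may outlive the service
public:
    explicit SearchService(QueryBuilder* b) : builder(b) {}
};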
Class Level Relationships:
Generalization:
The Generalization relationship indicates that one of the two
related classes (the subclass) is considered to be a
specialized form of the other (the super type) and superclass
is considered as 'Generalization' of subclass. In
practice, this means that any instance of the subtype is also an
instance of the superclass. An exemplary tree of generalizations of
this form is found in binomial nomenclature: human beings are a
subclass of simian, which are a subclass of mammal, and so on. The
relationship is most easily understood by the phrase 'an A is a B' (a
human is a mammal, a mammal is an animal).
The
UML graphical representation of a Generalization is a hollow triangle
shape on the superclass end of the line (or tree of lines) that
connects it to one or more subtypes.
The
generalization relationship is also known as the inheritance or "is
a" relationship.
The
superclass in the generalization relationship is also known as the
"parent", superclass, base class, or
base type.
The
subtype in the specialization relationship is also known as the
"child", subclass, derived class,
derived type, inheriting class, or inheriting type.
Note that this relationship bears no resemblance to the biological
parent/child relationship: the use of these terms is extremely
common, but can be misleading.
- Generalization-Specialization relationship
A is a type of B
E.g. "an oak is
a type of tree", "an automobile is a type of vehicle"
Realization:
In UML modeling, a realization relationship is a relationship
between two model elements, in which one model element (the client)
realizes (implements or executes) the behavior that the other model
element (the supplier) specifies. A realization is indicated by a
dashed line with an unfilled arrowhead towards the supplier.
Realizations can only be shown on class or component diagrams.
A realization is a
relationship between classes, interfaces, components, and packages
that connects a client element with a supplier element. A realization
relationship between classes and interfaces and between components
and interfaces shows that the class realizes the operations offered
by the interface.
Dependency:
Dependency
relationship is used to show that some element or a set of elements
depends on other model element and on class diagrams is usually
represented by usage dependency or abstraction. A dependency is
rendered as a dashed arrow between elements. The element at the tail
of the arrow (the client) depends on the model element at the
arrowhead (the supplier). The arrow may be labeled with an optional
stereotype and an optional name.
Data Access depends on Connection Pool: It is possible
to have a set of elements for the client or supplier. In this case,
one or more arrows with their tails on the clients are connected to
the tails of one or more arrows with their heads on the suppliers. A
small dot can be placed on the junction if desired. A note on the
dependency should be attached at the junction point.
2. Discuss the concept of dynamic type modeling and how it is different from static type modeling.
Answer:
UML diagrams
represent two different views of a system model:
- Static (or structural) view: emphasizes the static structure of the system using objects, attributes, operations and relationships. The structural view includes class diagrams and composite structure diagrams..
- Dynamic (or behavioral) view: emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. This view includes sequence diagrams, activity diagrams and state machine diagrams.
- Most object-oriented programming languages are statically typed, which means that the type of an object is bound at the time the object is created. Even so, that object will likely play different roles over time. This means that clients that use that object interact with the object through different sets of interfaces, representing interesting, possibly overlapping, sets of operations.
- Modeling the static nature of an object can be visualized in a class diagram. However, when you are modeling things like business objects, which naturally change their roles throughout a workflow; it’s sometimes useful to explicitly model the dynamic nature of that object’s type. In these circumstances, an object can gain and lose types during its life.
- To model a dynamic type,
- Specify the different possible types of that object by rendering each type as a class stereotyped as type (if the abstraction requires structure and behavior) or as interface (if the abstraction requires only behavior).
Model all the
roles the class of the object may take on at any point in time. You
can do so in two ways:
1. First, in a
class diagram, explicitly type each role that the class plays in its
association with other classes. Doing this specifies the face
instances of that class put on in the context of the associated
object.
2. Second, also in
a class diagram, specify the class-to-type relationships using
generalization.
In an interaction
diagram, properly render each instance of the dynamically typed
class. Display the role of the instance in brackets below the
object’s name.
To show the
change in role of an object, render the object once for each role it
plays in the interaction, and connect these objects with a message
stereotyped as become.
For example,
Figure 4-15 shows the roles that instances of the class Person might
play in the context of a human resources system.
Figure 1: Modeling
Static Types
This diagram
specifies that instances of the Person class may be any of the three
types, namely Candidate, Employee, or Retiree.
Figure 1-2 shows the dynamic nature of a person’s
type. In this fragment of an interaction diagram, p (the Person
object) changes its role from Candidate to Employee
Figure
2: Modeling Dynamic Types
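A small C++ sketch of the same idea (the role interfaces and member names are invented for illustration; UML itself does not prescribe this implementation): the Person object keeps its identity but changes the role through which it acts, much like the become message in the interaction diagram.

#include <iostream>
#include <string>
using namespace std;

// Role types rendered as interfaces.
struct Candidate { virtual void attendInterview() = 0; virtual ~Candidate() {} };
struct Employee  { virtual void doWork() = 0;          virtual ~Employee()  {} };

// Person realizes both role types and tracks its current dynamic role.
class Person : public Candidate, public Employee
{
    string name;
    string role;
public:
    explicit Person(const string& n) : name(n), role("Candidate") {}
    void hire() { role = "Employee"; }                 // the role change
    void attendInterview() { cout << name << " [" << role << "] attends an interview" << endl; }
    void doWork()          { cout << name << " [" << role << "] does some work" << endl; }
};

int main()
{
    Person p("p");
    p.attendInterview();   // p acting in the Candidate role
    p.hire();              // p changes its role
    p.doWork();            // p acting in the Employee role
    return 0;
}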
3. What are the important factors that need to be considered to model a system from different views?
Answer:
Modeling
Different Views of a System:
When we model a system from different views, we are in effect constructing our system simultaneously from multiple dimensions. By choosing the right set of
views, we set up a process that forces us to ask good questions about
our system and to expose risks that need to be attacked. If we do a
poor job of choosing these views or if we focus on one view at the
expense of all others, we run the risk of hiding issues and deferring
problems that will eventually destroy any chance of success.
To model a system
from different views;
- Decide which views we need to best express the architecture of our system and to expose the technical risks of our project. The five views: Use case, Design, Process, Implementation and Deployment.
- For each of these views, decide which artifacts we need to create to capture the essential details of that view.
- As part of our process planning, decide which of these diagrams we will want to put under some sort of formal or semiformal control. These are the diagrams for which we will want to schedule reviews and to preserve as documentation for the project.
- Allow room for diagrams that are thrown away. Such transitory diagrams are still useful for exploring the implications of your decisions and for experimenting with changes.
Example:
If we are modeling
a complex, distributed system, we will need to employ the full range
of the UML’s diagrams in order to express the architecture of our
system and the technical risks to our project as:
Use case view: Use
case diagrams, Activity diagrams for behavioral modeling
Design view:
Class diagrams for structural modeling, interaction diagram,
statechart diagrams
Process view:
Class diagrams for structural modeling, Interaction diagram for
behavioral modeling
Implementation
view: component diagrams
Deployment view: Deployment diagrams
4. Explain the different components of states with reference to state machines.
Answer:
State Machines:
A
state machine is a behavior that specifies the sequences of states an
object goes through during its lifetime in response to events,
together with its responses to those events. We use state machines to
model the dynamic aspects of a system. The state of an object is a
condition or situation during the life of an object during which it
satisfies some condition, performs some activity, or waits for some
events.
An
event is the specification of a significant occurrence that has a
location in time and space. In the context of state machines, an
event is an occurrence of a stimulus that can trigger a state
transition.
A transition is a relationship between two states indicating that an object in the first state will perform certain actions and enter the second state when a specified event occurs and specified conditions are satisfied. An activity is an ongoing nonatomic execution within a state machine. An action is an executable atomic computation that results in a change in state of the model or the return of a value.
A state has
several parts:
Name- A textual
string that distinguishes the state from other states; a state may be
anonymous, meaning that it has no name.
Entry/Exit
actions-Actions executed on entering and ending the state,
respectively.
Internal
transitions- Transitions that are handled without causing a change in
state.
Substates- The
nested structure of a state, involving disjoint or concurrent
substates.
Deferred event- A list of events that are not
handled in that state but, rather, are postponed and queued for
handling by the object in another state.
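A minimal sketch of these ideas in C++ (the phone-line example, its states and its actions are invented for illustration): each named state is an enumerator, events arrive as member-function calls, and an entry action runs when a transition changes the state.

#include <iostream>
using namespace std;

enum State { Idle, Dialing, Connected };         // named states

class PhoneLine
{
    State state;
public:
    PhoneLine() : state(Idle) {}
    void digitPressed()                          // event
    {
        if (state == Idle)
        {
            state = Dialing;                     // transition Idle -> Dialing
            cout << "entry action: start dial tone" << endl;
        }
    }
    void callAnswered()                          // event
    {
        if (state == Dialing)
        {
            state = Connected;                   // transition Dialing -> Connected
            cout << "entry action: open voice channel" << endl;
        }
    }
};

int main()
{
    PhoneLine line;
    line.digitPressed();
    line.callAnswered();
    return 0;
}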
- Explain the different stages to model a flow of control.
Answer:
Modeling a Flow of Control:
To model a flow of control:
- Set the context for the interaction, whether it is the system as a whole, a class, or an individual operation.
- Set the stage for the interaction by identifying which objects play a role. Set their initial properties, including their attribute values, state, and role.
- If your model emphasizes the structural organization of these objects, identify the links that connect them, relevant to the paths of communication that take place in this interaction. Specify the nature of the links using the UML’s standard stereotypes and constraints, as necessary.
- In time order, specify the messages that pass from object to object. As necessary, distinguish the different kinds of messages, include parameter and return values to convey the necessary detail of this interaction.
- Also to convey the necessary detail of this interaction, adorn each object at every moment in time with its state and role.
For example:
The figure shows a set of objects that interact in the context of a publish and subscribe mechanism. It includes three objects: p (a StockQuotePublisher), and s1 and s2 (both instances of StockQuoteSubscriber). The figure is an example of a sequence diagram, which emphasizes the time order of messages.
Figure: Flow of control by Time
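As a rough code analogue of that interaction, the sketch below uses class names modelled on the figure (the method names attach, publish and update are assumptions, not taken from the diagram) to show a publisher sending messages to two subscribers in time order:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical subscriber: receives a quote update message.
class StockQuoteSubscriber {
public:
    explicit StockQuoteSubscriber(std::string name) : name_(std::move(name)) {}
    void update(double price) { std::cout << name_ << " received price " << price << "\n"; }
private:
    std::string name_;
};

// Hypothetical publisher: keeps links to its subscribers and sends messages to them.
class StockQuotePublisher {
public:
    void attach(StockQuoteSubscriber* s) { subs_.push_back(s); }
    void publish(double price) {
        for (StockQuoteSubscriber* s : subs_)   // messages are sent in time order
            s->update(price);
    }
private:
    std::vector<StockQuoteSubscriber*> subs_;
};

int main() {
    StockQuoteSubscriber s1("s1"), s2("s2");
    StockQuotePublisher p;
    p.attach(&s1);
    p.attach(&s2);
    p.publish(101.5);   // p -> s1.update(), then p -> s2.update()
    return 0;
}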
February 2011
Master of Computer Application (MCA) – Semester 2
MC0070 – Operating System with UNIX
Assignment Set – 1
1. Define process. Explain the major components of a process.
Answer:
A process can be simply defined as a program in execution. Along with the program code, a process comprises the program counter value, processor register contents, values of variables, the stack, and program data. A process is created and terminated, and it follows some or all of the states of process transition, such as New, Ready, Running, Waiting, and Exit. A process is not the same as a program.
A process is more than program code. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. A program is an algorithm expressed in some programming language. Being passive, a program is only a part of a process. A process, on the other hand, includes:
- Current value of Program Counter (PC)
- Contents of the processors register
- Value of the variables
- The process stack, which typically contains temporary data such as subroutine parameter, return address, and temporary variables.
- A data section that contains global variables.
- A process is the unit of work in a system.
A process has certain attributes that directly affect execution; these include:
PID - The PID stands for process identification. This is a unique number that defines the process within the kernel.
PPID - This is the process's Parent PID, the creator of the process.
UID - The User ID number of the user that owns this process.
EUID - The effective User ID of the process.
GID - The Group ID of the user that owns this process.
EGID - The effective Group ID of the process.
Priority - The priority that this process runs at.
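On a UNIX system, a process can read most of these identity attributes about itself through standard POSIX calls. The short C++ sketch below only illustrates this; the output labels mirror the list above:

#include <iostream>
#include <unistd.h>      // getpid, getppid, getuid, geteuid, getgid, getegid

int main() {
    // Print the identity attributes of the current process.
    std::cout << "PID:  " << getpid()  << "\n";   // process identification
    std::cout << "PPID: " << getppid() << "\n";   // parent process ID
    std::cout << "UID:  " << getuid()  << "\n";   // owning user
    std::cout << "EUID: " << geteuid() << "\n";   // effective user
    std::cout << "GID:  " << getgid()  << "\n";   // owning group
    std::cout << "EGID: " << getegid() << "\n";   // effective group
    return 0;
}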
To view a process, you use the ps command:
# ps -l
F S UID PID PPID C PRI NI P SZ:RSS WCHAN TTY TIME COMD
30 S 0 11660 145 1 26 20 * 66:20 88249f10 ttyq 6 0:00 rlogind
The
F field: This is the flag field. It uses hexadecimal values which
are added to show the value of the flag bits for the process. For a
normal user process this will be 30, meaning it is loaded into
memory.
The
S field: The S field is the state of the process, the two most
common values are S for Sleeping and R for Running. An important
value to look for is X, which means the process is waiting for memory
to become available.
PID field: The PID shows the Process ID of each process. This value should be unique. Generally PIDs are allocated from lowest to highest, but wrap around at some point. This value is necessary for you to send a signal to a process, such as the KILL signal.
PRI field: This stands for the priority field. The lower the value, the higher the priority. This refers to the process NICE value, which ranges from 0 to 39. The default is 20; as a process uses the CPU, the system will raise the nice value.
P
flag: This is the processor flag. On the SGI this refers to the
processor the process is running on.
SZ
field: This refers to the SIZE field. This is the total number of
pages in the process. Each page is 4096 bytes.
TTY
field: This is the terminal assigned to your process.
Time
field: The cumulative execution time of the process in minutes
and seconds.
COMD
field: The command that was executed.
In the process model, all the software on the computer is organized into a number of sequential processes. A process includes its PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes.
2. Describe the following:
A) Layered Approach
B) Micro Kernels
C) Virtual Machines
Answer:
Layered
Approach: With proper hardware support, operating
systems can be broken into pieces that are smaller and more
appropriate than those allowed by the original MS-DOS or UNIX
systems. The operating system can then retain much greater control
over the computer and over the applications that make use of that
computer. Implementers have more freedom in changing the inner
workings of the system and in creating modular operating systems.
Under the top-down approach, the overall functionality and features are determined and then separated into components. Information hiding
is also important, because it leaves programmers free to implement
the low-level routines as they see fit, provided that the external
interface of the routine stays unchanged and that the routine itself
performs the advertised task.
A
system can be made modular in many ways. One method is the layered
approach, in which the operating system is broken up into a
number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
Fig. 2.2: Layered Architecture (layers from top to bottom: Users, File Systems, Inter-process Communication, I/O and Device Management, Virtual Memory, Primitive Process Management, Hardware)
An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical OS layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.
The
main advantage of the layered approach is simplicity of construction
and debugging. The layers are selected so that each uses functions
(operations) and services of only lower-level layers. This approach
simplifies debugging and system verification. The first layer can be
debugged without any concern for the rest of the system, because, by
definition, it uses only the basic hardware (which is assumed
correct) to implement its functions. Once the first layer is
debugged, its correct functioning can be assumed while the second
layer is debugged, and so on. If an error is found during debugging
of a particular layer, the error must be on that layer, because the
layers below it are already debugged. Thus, the design and
implementation of the system is simplified.
Each
layer is implemented with only those operations provided by
lower-level layers. A layer does not need to know how these
operations are implemented; it needs to know only what these
operations do. Hence, each layer hides the existence of certain data
structures, operations, and hardware from higher-level layers. The
major difficulty with the layered approach involves appropriately defining the various layers. Because a layer can use only lower-level layers, careful planning is necessary. For example, the device driver
for the backing store (disk space used by virtual-memory algorithms)
must be at a lower level than the memory-management routines, because
memory management requires the ability to use the backing store.
Other requirements may not be so obvious. The backing-store driver would
normally be above the CPU scheduler, because the driver may need to
wait for I/O and the CPU can be rescheduled during this time.
However, on a larger system, the CPU scheduler may have more
information about all the active processes than can fit in memory.
Therefore, this information may need to be swapped in and out of
memory, requiring the backing-store driver routine to be below the
CPU scheduler.
A
final problem with layered implementations is that they tend to be
less efficient than other types. For instance, when a user program
executes an I/O operation, it executes a system call that is trapped
to the I/O layer, which calls the memory-management layer, which in
turn calls the CPU-scheduling layer, which is then passed to the
hardware. At each layer, the parameters may be modified; data may
need to be passed, and so on. Each layer adds overhead to the system
call; the net result is a system call that takes longer than does one
on a non-layered system. These limitations have caused a small
backlash against layering in recent years. Fewer layers with more
functionality are being designed, providing most of the advantages of
modularized code while avoiding the difficult problems of layer
definition and interaction.
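As a rough illustration of the rule that each layer uses only the operations of the layer directly below it, the following C++ sketch (layer names and operations are invented for the example) stacks three layers so that a request passes down one layer at a time and the top layer never touches the hardware directly:

#include <iostream>

// Layer 0: the "hardware", simulated here by a simple block store.
class Hardware {
public:
    void writeBlock(int block) { std::cout << "hardware: wrote block " << block << "\n"; }
};

// Layer 1: device management, built only on the hardware layer's operations.
class DeviceLayer {
public:
    explicit DeviceLayer(Hardware& hw) : hw_(hw) {}
    void writeSector(int sector) { hw_.writeBlock(sector); }
private:
    Hardware& hw_;
};

// Layer 2: file system, built only on the device layer's operations.
class FileSystemLayer {
public:
    explicit FileSystemLayer(DeviceLayer& dev) : dev_(dev) {}
    void saveFile(int firstSector) { dev_.writeSector(firstSector); }
private:
    DeviceLayer& dev_;
};

int main() {
    Hardware hw;
    DeviceLayer dev(hw);
    FileSystemLayer fs(dev);
    fs.saveFile(42);    // the request passes down layer by layer, never skipping one
    return 0;
}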
Micro-kernels:
When the kernel became large and difficult to manage, researchers at Carnegie Mellon University, in the mid-1980s, developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system and user-level programs. The result is a smaller
kernel. There is little consensus regarding which services should
remain in the kernel and which should be implemented in user space.
Typically, however, micro-kernels provide minimal process and memory
management, in addition to a communication facility.
Fig. 2.3: Microkernel Architecture (Device Drivers, File Server, Client Process and other services run in user space on top of the Microkernel, which runs on the Hardware)
The
main function of the microkernel is to provide a communication
facility between the client program and the various services that are
also running in user space. Communication is provided by message passing. The client program and a service never interact directly; rather, they communicate indirectly by exchanging messages with the microkernel.
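A highly simplified picture of this indirect, message-based communication is sketched below in C++. The message format and the in-memory "kernel" queue are assumptions made only to show the idea that client and service never call each other directly:

#include <iostream>
#include <queue>
#include <string>

// Hypothetical message passed through the microkernel.
struct Message {
    std::string from, to, body;
};

// The "microkernel" only stores and forwards messages.
class MicroKernel {
public:
    void send(const Message& m) { mailbox_.push(m); }
    bool receive(const std::string& to, Message& out) {
        if (mailbox_.empty() || mailbox_.front().to != to) return false;
        out = mailbox_.front();
        mailbox_.pop();
        return true;
    }
private:
    std::queue<Message> mailbox_;
};

int main() {
    MicroKernel kernel;
    // The client asks the file service for a file, via the kernel only.
    kernel.send({"client", "file-service", "read /etc/passwd"});

    Message m;
    if (kernel.receive("file-service", m))          // the service picks the request up
        std::cout << "file-service got: " << m.body << "\n";
    return 0;
}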
One benefit of the microkernel approach is the ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is also easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user rather than kernel processes; if a service fails, the rest of the operating system remains untouched.
Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services.
The following figure shows the UNIX operating system architecture. At the center is the hardware, covered by the kernel. Above that are the UNIX utilities and the command interface, such as the shell (sh).
Virtual Machine
The layered approach of operating systems is taken to its logical conclusion in the concept of the virtual machine. The fundamental idea behind a virtual machine is to abstract the hardware of a single computer (the CPU, memory, disk drives, network interface cards, and so forth) into several different execution environments, thereby creating the illusion that each separate execution environment is running its own private computer. By using CPU Scheduling and Virtual
Memory techniques, an operating system can create the illusion that a
process has its own processor with its own (virtual) memory. Normally
a process has additional features, such as system calls and a file
system, which are not provided by the hardware. The Virtual machine
approach does not provide any such additional functionality but
rather an interface that is identical to the underlying bare
hardware. Each process is provided with a (virtual) copy of the
underlying computer.
Hardware virtual machine
The
original meaning of virtual machine,
sometimes called a hardware virtual
machine, is that of a number of discrete identical execution
environments on a single computer, each of which runs an operating
system (OS). This can allow applications written for one OS to be
executed on a machine which runs a different OS, or provide execution
“sandboxes” which provide a greater level of isolation between
processes than is achieved when running multiple processes on the
same instance of an OS. One use is to provide multiple users the
illusion of having an entire computer, one that is their “private”
machine, isolated from other users, all on a single physical machine.
Another advantage is that booting and restarting a virtual machine
can be much faster than with a physical machine, since it may be
possible to skip tasks such as hardware initialization.
Such software is now often referred to with the terms virtualization and virtual servers. The host software which provides this capability is often referred to as a virtual machine monitor or hypervisor.
Software
virtualization can be done in three major ways:
- Emulation, full system simulation, or "full virtualization with dynamic recompilation": the virtual machine simulates the complete hardware, allowing an unmodified OS for a completely different CPU to be run.
- Paravirtualization: the virtual machine does not simulate hardware but instead offers a special API that requires OS modifications. An example of this is XenSource's XenEnterprise (www.xensource.com).
- Native virtualization and "full virtualization": the virtual machine simulates only enough hardware to allow an unmodified OS to be run in isolation, but the guest OS must be designed for the same type of CPU. The term native virtualization is also sometimes used to indicate that hardware assistance through Virtualization Technology is being used.
Application virtual machine:
Another meaning of virtual
machine is a piece of
computer software that isolates the application being used by the
user from the computer. Because versions of the virtual machine are
written for various computer platforms, any application written for
the virtual machine can be operated on any of the platforms, instead
of having to produce separate versions of the application for each
computer and operating system. The application is run on the computer
using an interpreter or Just-in-Time compilation. One of the best-known examples of an application virtual machine is Sun Microsystems' Java Virtual Machine.
- Memory management is important in operating systems. Discuss the main problems that can occur if memory is managed poorly.
Answer:
The part of the operating system which handles the management of primary memory is called the memory manager. Since every process must have some amount of primary memory in order to execute, the performance of the memory manager is crucial to the performance of the entire system. Virtual memory refers to the technique in which some space on the hard disk is used as an extension of main memory, so that a user program need not worry if its size exceeds the size of the main memory.
For
paging memory management, each process is associated with a page
table. Each entry in the table contains the frame number of the
corresponding page in the virtual address space of the process. This
same page table is also the central data structure for the virtual memory mechanism based on paging, although more facilities are needed, such as control bits and multi-level page tables. Segmentation is another popular method for both memory management and virtual memory.
Basic
Cache Structure: The idea of cache memories is similar to
virtual memory in that some active portion of a low-speed memory is
stored in duplicate in a higher-speed cache memory. When a memory
request is generated, the request is first presented to the cache
memory, and if the cache cannot respond, the request is then
presented to main memory.
Content-Addressable
Memory (CAM) is a special type of computer memory used in certain
very high speed searching applications. It is also known as
associative memory, associative storage, or associative
array, although the last term is more often used for a
programming data structure.
In
addition to the responsibility of managing processes, the operating
system must efficiently manage the primary memory of the computer.
The part of the operating system which handles this responsibility is
called the memory manager. Since every process must have some amount
of primary memory in order to execute, the performance of the memory
manager is crucial to the performance of the entire system. Nutt
explains: “The memory manager is responsible for allocating primary
memory to processes and for assisting the programmer in loading and
storing the contents of the primary memory. Managing the sharing of
primary memory and minimizing memory access time are the basic goals
of the memory manager.”
The
real challenge of efficiently managing memory is seen in the case of
a system which has multiple processes running at the same time. Since
primary memory can be space-multiplexed, the memory manager can
allocate a portion of primary memory to each process for its own use.
However, the memory manager must keep track of which processes are
running in which memory locations, and it must also determine how to
allocate and de-allocate available memory when new processes are
created and when old processes complete execution. While various
different strategies are used to allocate space to processes competing for memory, three of the most popular are Best fit, Worst fit, and First fit. Each of these strategies is described below:
Best
fit: The allocator places a process in the smallest block of
unallocated memory in which it will fit. For example, suppose a
process requests 12KB of memory and the memory manager currently has
a list of unallocated blocks of 6KB, 14KB, 19KB, 11KB, and 13KB
blocks. The best-fit strategy will allocate 12KB of the 13KB block to
the process.
Worst
fit: The memory manager places a process in the largest block of
unallocated memory available. The idea is that this placement will
create the largest hole after the allocations, thus increasing the
possibility that, compared to best fit, another process can use the
remaining space. Using the same example as above, worst fit will
allocate 12KB of the 19KB block to the process, leaving a 7KB block
for future use.
First
fit: There may be many holes in the memory, so the operating
system, to reduce the amount of time it spends analyzing the
available spaces, begins at the start of primary memory and allocates
memory from the first hole it encounters large enough to satisfy the
request. Using the same example as above, first fit will allocate
12KB of the 14KB block to the process.
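The three strategies can be compared directly in code. The C++ sketch below is a simplified model (it ignores splitting blocks and coalescing free space) that runs the same example as above: a 12KB request against free blocks of 6, 14, 19, 11 and 13 KB:

#include <iostream>
#include <vector>

// Each function returns the index of the chosen free block, or -1 if nothing fits.
int firstFit(const std::vector<int>& blocks, int request) {
    for (std::size_t i = 0; i < blocks.size(); ++i)
        if (blocks[i] >= request) return static_cast<int>(i);   // first hole big enough
    return -1;
}

int bestFit(const std::vector<int>& blocks, int request) {
    int best = -1;
    for (std::size_t i = 0; i < blocks.size(); ++i)
        if (blocks[i] >= request && (best == -1 || blocks[i] < blocks[best]))
            best = static_cast<int>(i);                          // smallest hole that fits
    return best;
}

int worstFit(const std::vector<int>& blocks, int request) {
    int worst = -1;
    for (std::size_t i = 0; i < blocks.size(); ++i)
        if (blocks[i] >= request && (worst == -1 || blocks[i] > blocks[worst]))
            worst = static_cast<int>(i);                         // largest hole that fits
    return worst;
}

int main() {
    std::vector<int> freeBlocks = {6, 14, 19, 11, 13};   // sizes in KB
    int request = 12;                                    // the request always fits in this example
    std::cout << "first fit -> " << freeBlocks[firstFit(freeBlocks, request)] << "KB block\n"; // 14
    std::cout << "best fit  -> " << freeBlocks[bestFit(freeBlocks, request)]  << "KB block\n"; // 13
    std::cout << "worst fit -> " << freeBlocks[worstFit(freeBlocks, request)] << "KB block\n"; // 19
    return 0;
}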
Notice
in the diagram above that the Best fit and First fit strategies both
leave a tiny segment of memory unallocated just beyond the new
process. Since the amount of memory is small, it is not likely that
any new processes can be loaded here. This condition of splitting
primary memory into segments as the memory is allocated and
deallocated is known as fragmentation. The Worst fit strategy
attempts to reduce the problem of fragmentation by allocating the
largest fragments to new processes. Thus, a larger amount of space
will be left as seen in the diagram above.
Another way in which the memory manager enhances the ability of the operating system to support multiple processes running simultaneously is by the use of virtual memory. According to Nutt, "virtual memory
strategies allow a process to use the CPU when only part of its
address space is loaded in the primary memory. In this approach, each
process’s address space is partitioned into parts that can be
loaded into primary memory when they are needed and written back to
secondary memory otherwise.” Another consequence of this approach
is that the system can run programs which are actually larger than
the primary memory of the system, hence the idea of “virtual
memory.” Brookshear explains how this is accomplished:
“Suppose,
for example, that a main memory of 64 megabytes is required but only
32 megabytes is actually available. To create the illusion of the
larger memory space, the memory manager would divide the required
space into units called pages and store the contents of these pages
in mass storage. A typical page size is no more than four kilobytes.
As different pages are actually required in main memory, the memory
manager would exchange them for pages that are no longer required,
and thus the other software units could execute as though there were
actually 64 megabytes of main memory in the machine.”
In
order for this system to work, the memory manager must keep track of
all the pages that are currently loaded into the primary memory. This
information is stored in a page table maintained by the memory
manager. A page fault occurs whenever a process requests a page that
is not currently loaded into primary memory. To handle page faults,
the memory manager takes the following steps:
- The memory manager locates the missing page in secondary memory.
- The page is loaded into primary memory, usually causing another page to be unloaded.
- The page table in the memory manager is adjusted to reflect the new state of the memory.
- The processor re-executes the instructions which caused the page fault.
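Those steps can be mimicked with a toy page-table model. The C++ sketch below is purely illustrative (a real memory manager works on hardware page tables and must also choose a victim page to unload); a lookup that misses raises a "page fault", loads the page, updates the table and retries, in the order listed above:

#include <iostream>
#include <map>

class MemoryManager {
public:
    // Translate a virtual page number to a frame number, handling faults.
    int access(int page) {
        auto it = pageTable_.find(page);
        if (it == pageTable_.end()) {            // page fault: page not in primary memory
            std::cout << "page fault on page " << page << "\n";
            int frame = loadFromSecondary(page); // steps 1-2: locate the page and load it into a frame
            pageTable_[page] = frame;            // step 3: adjust the page table
            return access(page);                 // step 4: re-execute the access
        }
        return it->second;
    }
private:
    int loadFromSecondary(int /*page*/) { return nextFreeFrame_++; }  // simplified loader
    std::map<int, int> pageTable_;               // virtual page -> physical frame
    int nextFreeFrame_ = 0;
};

int main() {
    MemoryManager mm;
    std::cout << "page 7 -> frame " << mm.access(7) << "\n";  // faults, then resolves
    std::cout << "page 7 -> frame " << mm.access(7) << "\n";  // now hits the page table
    return 0;
}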
- Discuss the following:
A) File Substitution
B) I/O Control
Answer:
File Substitution: It is important to understand how file substitution actually works. In the previous examples, the ls command doesn't do the work of file substitution – the shell does.
Even though all the previous examples employ the ls command, any
command that accepts filenames on the command line can use file
substitution. In fact, using the simple echo command is a good way to
experiment with file substitution without having to worry about
unexpected results.
For example:
$ echo p*
p10 p101 p11
When
a metacharacter is encountered in a UNIX command, the shell looks for
patterns in filenames that match the metacharacter. When a match is
found, the shell substitutes the actual filename in place of the
string containing the metacharacter so that the command sees only a
list of valid filenames. If the shell finds no filenames that match
the pattern, it passes an empty string to the command.
The
shell can expand more than one pattern on a single line. Therefore,
the shell interprets the command
$
ls LINES.* PAGES.*
as
$
ls LINES.dat LINES.idx PAGES.dat PAGES.idx
There
are file substitution situations that you should be wary of. You
should be careful about the use of whitespace (extra blanks) in a
command line. If you enter the following command, for example, the
results might surprise you:
$ ls LINES. *
What
has happened is that the shell interpreted the first parameter as the
filename LINES. with no metacharacters and passed it directly on to
ls. Next, the shell saw the single asterisk (*), and matched it to
any character string, which matches every file in the directory. This
is not a big problem if you are simply listing the files, but it
could mean disaster if you were using the command to delete data
files! Unusual results can also occur if you use the period (.) in a
shell command. Suppose that you are using the
$
ls .*
command
to view the hidden files. What the shell would see after it finishes
interpreting the meta character is
$
ls . .. .profile
which
gives you a complete directory listing of both the current and parent
directories.
When
you think about how filename substitution works, you might assume
that the default form of the ls command is actually
$
ls *
However,
in this case the shell passes to ls the names of directories, which
causes ls to list all the files in the subdirectories. The actual
form of the default ls command is
$
ls .
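Programs can request the same kind of expansion themselves. The C++ sketch below uses the POSIX glob() function (declared in <glob.h>) to expand a pattern such as p* into the matching filenames, much as the shell does before it runs a command; the pattern and sample output are just assumptions:

#include <glob.h>     // POSIX glob(), globfree()
#include <iostream>

int main() {
    glob_t results;
    // Expand the pattern "p*" against the current directory.
    if (glob("p*", 0, nullptr, &results) == 0) {
        for (size_t i = 0; i < results.gl_pathc; ++i)
            std::cout << results.gl_pathv[i] << "\n";   // e.g. p10, p101, p11
        globfree(&results);
    } else {
        std::cout << "no filenames match the pattern\n";
    }
    return 0;
}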
I/O Control: Either the hardware that controls the transfer of data between main memory and peripheral devices, or the part of the system software that in turn controls that hardware.
Many
I/O tasks can be complex and require logic to be applied to the data
to convert formats and other similar duties. In these situations, the
simplest solution is to ask the CPU
to handle the logic, but because I/O devices are relatively slow, a
CPU could waste time (in computer perspective) waiting for the data
from the device. This situation is called 'I/O bound'.
Channel architecture avoids this problem by using a separate, independent, low-cost processor. Channel processors
are simple, but self-contained, with minimal logic and sufficient
on-board scratchpad memory (working storage) to handle I/O tasks.
They are typically not powerful or flexible enough to be used as a
computer on their own and can be construed as a form of coprocessor.
A
CPU sends relatively small channel
programs to the controller via the channel to handle I/O tasks,
which the channel and controller can, in many cases, complete without
further intervention from the CPU (exception: those channel programs
which utilize 'program controlled interrupts', PCIs, to facilitate
program loading, demand paging and other essential system tasks).
When
I/O transfer is complete or an error is detected, the controller
communicates with the CPU through the channel using an interrupt.
Since the channel has direct access to the main memory, it is also
often referred to as DMA
controller
(where DMA stands for direct
memory access), although that term is looser in definition and is
often applied to non-programmable devices as well.
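The division of labour between the CPU and a channel processor can be loosely imitated with a worker thread: the main thread hands over an I/O "program", keeps computing, and is notified when the transfer completes. The C++ sketch below is only an analogy in terms of threads, not how a real channel is programmed:

#include <chrono>
#include <iostream>
#include <thread>

// The "channel": carries out the I/O task independently of the main thread.
void channelProgram(int bytes) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100)); // simulate a slow device
    std::cout << "channel: transferred " << bytes << " bytes\n";  // stands in for the completion interrupt
}

int main() {
    std::thread channel(channelProgram, 4096);   // the CPU hands the I/O task over...

    long long sum = 0;                           // ...and keeps doing useful work meanwhile
    for (int i = 0; i < 1000000; ++i) sum += i;
    std::cout << "cpu: finished computing, sum = " << sum << "\n";

    channel.join();                              // wait for the "interrupt" (task completion)
    return 0;
}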
5. Discuss the concept of File
substitution with respect to managing data files in UNIX.
Answer:
How File Substitution Works: It is important to understand how file substitution actually works. In the previous examples, the ls command doesn't do the work of file substitution – the shell does. Even though all the previous examples employ the ls command,
any command that accepts filenames on the command line can use file
substitution. In fact, using the simple echo command is a good way
to experiment with file substitution without having to worry about
unexpected results.
For example:
$ echo p*
p10 p101 p11
When
a metacharacter is encountered in a UNIX command, the shell looks
for patterns in filenames that match the metacharacter. When a match
is found, the shell substitutes the actual filename in place of the
string containing the metacharacter so that the command sees only a
list of valid filenames. If the shell finds no filenames that match
the pattern, it passes an empty string to the command.
The
shell can expand more than one pattern on a single line. Therefore,
the shell interprets the command
$
ls LINES.* PAGES.*
as
$
ls LINES.dat LINES.idx PAGES.dat PAGES.idx
There
are file substitution situations that you should be wary of. You
should be careful about the use of whitespace (extra blanks) in a
command line. If you enter the following command, for example, the
results might surprise you:
$ ls LINES. *
What has happened is that the shell interpreted the first parameter as
the filename LINES. with no metacharacters and passed it directly on
to ls. Next, the shell saw the single asterisk (*), and matched it
to any character string, which matches every file in the directory.
This is not a big problem if you are simply listing the files, but
it could mean disaster if you were using the command to delete data
files! Unusual results can also occur if you use the period (.) in
a shell command. Suppose that you are using the
$
ls .*
command
to view the hidden files. What the shell would see after it finishes
interpreting the meta character is
$
ls . .. .profile
which
gives you a complete directory listing of both the current and
parent directories.
When
you think about how filename substitution works, you might assume
that the default form of the ls command is actually
$
ls *
However,
in this case the shell passes to ls the names of directories, which
causes ls to list all the files in the subdirectories. The actual
form of the default ls command is
$ ls .
- How do we make calculations using dc and bc utilities? Describe with at least two examples in each case.
Answer:
Making
Calculations with dc and bc: UNIX has two calculator programs
that you can use from the command line: dc and bc. The dc (desk
calculator) program uses Reverse Polish Notation (RPN), familiar to
everyone who has used Hewlett-Packard pocket calculators, and the bc
(basic calculator) program uses the more familiar algebraic notation.
Both programs perform essentially the same calculations.
Calculating
with bc
The
basic calculator, bc, can do calculations to any precision that you
specify. Therefore, if you know how to calculate pi and want to know
its value to 20, 50, or 200 places, for example, use bc. This tool
can add, subtract, multiply, divide, and raise a number to a power.
It can take square roots, compute sines and cosines of angles,
calculate exponentials and logarithms, and handle arctangents and
Bessel functions. In addition, it contains a programming language
whose syntax looks much like that of the C programming language. This
means that you can use the following:
- Simple and array variables
- Expressions
- Tests and loops
- Functions that you define
Also, bc can take
input from the keyboard, from a file, or from both.
Here are some
examples of bc receiving input from the keyboard:
$ bc
2*3
6
To do
multiplication, all you have to do is enter the two values with an
asterisk between them. To exit from bc, just type Ctrl+d.
However, you can also continue giving bc more calculations to do.
Here’s a simple
square root calculation (as a continuation of the original bc
command):
sqrt(11)
3
Oops! The default
behavior of bc is to treat all numbers as integers. To get
floating-point numbers (that is, numbers with decimal points in
them), use the scale command. For example, the following input tells
bc that you want it to set four decimal places and then try the
square root example again:
scale=4
sqrt(11)
3.3166
In addition to
setting the number of decimal places with scale, you can set the
number of significant digits with length.
You need not
always use base-10 for all your calculations, either. For example,
suppose that you want to calculate the square root of the base-8
(octal) number, 11. First change the input base to 8 and then enter
the same square root command as before to do the calculation:
ibase=8
sqrt(11)
3.0000
Ctrl+D
$
This result is
correct because octal 11 is decimal 9 and the square root of 9 is 3
in both octal and decimal.
You can use a
variable even without a program:
$ bc
x=5
10*x
50
Here’s a simple
loop in bc’s C-like syntax:
y=1
while(y<5){
y^2
y=y+1
}
1
4
9
16
The first line
sets y to the value 1. The next four lines establish a loop: the
middle two lines repeat as long as the value of y is less than 5
(while(y<5)). Those two repeated lines cause bc to print the value
of y-squared and then add one to the value of y. Note that bc doesn’t
display the value of a variable when it’s on a line with an equals
sign (or a while statement). Also, note the positions of the braces.
Here’s another,
more compact kind of loop. It sets the initial value for y, tests the
value of y, and adds one to the value of y, all on one line:
for (y = 1; y <=
5; y = y + 1){
3*y
}
3
6
9
12
15
Initially, y is
set to 1. Then the loop tests whether the variable is less than or
equal to 5. Because it is, bc performs the calculation 3*y and prints
3. Next, 1 is added to the present value of y, making it 2. That’s
also less than 5, so bc performs the 3*y calculation, which results
in 6 being printed. y is incremented to 3, which is then tested;
because 3 is less than 5, 3*y is calculated again. At some point, bc
increments y to 6, which is neither less than 5 nor equal to it, so
that the loop terminates with no further calculation or display.
You can define and
use new functions for the bc program. A bc function is a device that
can take in one or more numbers and calculate a result. For example,
the following function, s, adds three numbers:
define s(x,y,z){
return(x+y+z)
}
To use the s
function, you enter a command such as the following:
s(5,9,22)
36
Each variable name
and each function name must be a single lowercase letter. If you are
using the math library, bc -l (discussed below), the letters a, c, e, j, l, and s are already used.
If you have many
functions that you use fairly regularly, you can type them into a
text file and start bc by entering bc myfile.bc (where myfile is the
name of the text file). The bc program then knows those functions and you
can invoke them without having to type their definitions again. If
you use a file to provide input to bc, you can put comments in that
file. When bc reads the file, it ignores anything that you type
between /* and */.
If scale is 0, the
bc program does modulus division (using the % symbol), which provides
the remainder that results from the division of two integers, as in
the following example:
scale=4
5/2
2.5000
5%2
0
scale=0
5/2
2
5%2
1
If scale is not 0,
the numbers are treated as floating point even if they are typed as
integers.
In addition to
including C's increment operators (++ and --), bc also provides
some special assignment operators: +=, -=, *=, /=, and ^=.
The built-in math
functions include the following:
- a(x): the arc tangent of x
- c(x): the cosine of x
- e(x): e raised to the power x
- j(n,x): the Bessel function of n and x, where n is an integer and x is any real number
- l(x): the natural logarithm of x
- s(x): the sine of x
To use these math
functions, you must invoke bc with the -l option, as follows:
$ bc -l
Calculating
with dc
As mentioned
earlier, the desk calculator, dc, uses RPN, so unless you’re
comfortable with that notation, you should stick with bc. Also, dc
does not provide a built-in programming language, built-in math
functions, or the capability to define functions. It can, however,
take its input from a file.
If you are
familiar with stack-oriented calculators, you’ll find that dc is an
excellent tool. It can do all the calculations that bc can and it
also lets you manipulate the stack directly.
To display values,
you must enter the p command. For example, to add and print the sum
of 5 and 9, enter
5
9
+p
14
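The stack behaviour that dc exposes directly can be seen in a tiny RPN evaluator. The C++ sketch below (which, by assumption, supports only the four arithmetic operators and the p print command) evaluates the same "5 9 + p" example:

#include <iostream>
#include <sstream>
#include <stack>
#include <string>

int main() {
    std::stack<double> st;
    std::string token;
    std::istringstream input("5 9 + p");       // same calculation as the dc example

    while (input >> token) {
        if (token == "p") {                    // p: print the top of the stack
            std::cout << st.top() << "\n";
        } else if (token == "+" || token == "-" || token == "*" || token == "/") {
            double b = st.top(); st.pop();     // operands are popped off the stack...
            double a = st.top(); st.pop();
            if (token == "+") st.push(a + b);
            else if (token == "-") st.push(a - b);
            else if (token == "*") st.push(a * b);
            else st.push(a / b);               // ...and the result is pushed back
        } else {
            st.push(std::stod(token));         // numbers are pushed onto the stack
        }
    }
    return 0;
}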