Thursday, July 3, 2014

Serial EEPROM v/s Parallel EEPROM

SERIAL EEPROMS
The main feature that sets a Serial EEPROM apart from parallel
devices is, as its name implies, the ability to communicate through
a serial interface. This ability has numerous benefits. First,
serial communication is accomplished with a minimum number of I/Os.
Serial EEPROMs require only two to four lines (depending
on the hardware and software protocol) for complete
communication: memory addressing, data input and
output, and device control. Thus, the hardware interface
requirements for Serial EEPROMs are kept to a minimum.
The most common Serial EEPROMs in use today
are devices that utilize a 2-wire protocol.
Another benefit of serial communication is package size.
Ranging from densities of 256 to 16K bits, most Serial
EEPROMs today are available in space-saving 8 pin
PDIP and 150mil wide SOIC packaging. This is obviously
very beneficial for applications where product size and
weight are key design factors. The final benefit is low
current consumption. Because of the limited number of I/O
ports, and therefore limited on-chip support requirements, operating
currents for Serial EEPROMs are usually well
below 3 milliamperes.
Other features of Serial EEPROMs include: 1) Byte
programmability - the ability to erase and program one
byte at a time without affecting the contents of the other
memory locations in the array; 2) Clock rates of up to
6MHz - 2-wire devices are rated at 100kHz and 400kHz
per the standard I2C protocol, while 3-wire devices can
be operated at rates up to 6MHz; 3) Low voltage operation -
Microchip has introduced a family of devices that operate,
for both read and write, down to 1.8V. This family
complements the other 2V and 2.5V low voltage Serial
EEPROM families available.
PARALLEL NON-VOLATILE
MEMORIES
There are a number of memory devices that fall into this
category. The major ones include Parallel EEPROMs,
Flash memory products, EPROMs, and SRAMs with
battery back-up.
The main common feature of all of these devices is that
communication with the device is done through a parallel
interface, which allows high data-transfer rates.
Each type of device has separate data, address and
control lines; thus, pin counts are in the 24 to 40 pin
range. This also results in relatively large and costly
packages and large footprints, even with the most advanced
surface mount packages like TSOP. SRAMs
with on-board batteries require DIP package heights
that are significantly greater than those of standard DIP
packages, adding to their package size and cost disadvantage.
Parallel EEPROMs and battery-backed SRAMs are
the only two of the four major types of parallel non-volatile
memories that can erase and
program one byte at a time. EPROM and Flash devices
require the whole array, or at least large sectors, to be
erased prior to reprogramming.
SERIAL VERSUS PARALLEL
Serial EEPROMs have five major advantages over
parallel non-volatile memories.
1) Lower Current Consumption - The maximum operating
current (at 5 volts operating voltage) of a 16K
serial EEPROM device is approximately an order of
magnitude less than that of an equivalent density
parallel EEPROM. Operating currents for 16K
Serials are specified at 3mA, while 16K parallel
devices are specified at 30mA and above. This
relationship will continue as 64K serial devices are
introduced. Since power consumption is directly
proportional to current consumption, the lower the
current the lower the power consumption.
2) Lower Voltage - Serial EEPROMs have been available
in single supply low voltage options for some
time. As mentioned above, Microchip has low
voltage Serial EEPROMs that operate down to
1.8V, as well as other low voltage Serials that
function down to 2.0V or 2.5V. 3V EPROMs
and parallel EEPROMs and single voltage 5V flash
devices are just being introduced to the market.
(Most flash devices on the market today require
12V for programming in addition to the 5 volts
required for normal operation). Low voltage operation
also has a positive effect on power consumption.
A reduction in the operating voltage from 5
volts to 1.8 volts will result in a power consumption
reduction of almost 90% and almost a 65% reduction
in power if the operating voltage is reduced from
3V to 1.8V.
3) Programmability - Neither currently available Flash
devices nor EPROMs have the ability to program
one byte at a time. Erasing is an array or sector
function. Therefore, whenever one byte needs to be
reprogrammed the entire array or sector must be
reprogrammed. This procedure takes a relatively
long amount of time to complete, time which may
not be available, as is the case when storing critical
parameters or data during inadvertent and unexpected
system power loss. This procedure also
requires software overhead to manage the retrieval
and reprogramming operation.
4) Physical Size - Again, when comparing a 16K Serial
EEPROM to a 16K parallel device the serial has a
significant advantage. The area of the 150mil 8 pin
SOIC footprint is less than 50K square mils. This
compares to an area of more than 250K square mils
for a 24 pin SOIC and almost 800K square mils for
a 24 pin 500mil DIP package footprint.
5) I/O Requirements - Serials only require 2 to 4 input
or output lines for complete communications. Most
parallel devices require at least 22 lines, depending
on the memory density. This results in increased
microcontroller/microprocessor overhead and additional
real estate to accommodate the numerous
hardware lines.
The advantages that parallel devices currently have
over Serial EEPROMs are memory density and AC performance.
However, in most microcontroller-based applications
for which Serial EEPROMs are intended, high
density and AC performance are not the most critical design issues or
the most needed product features.
The key benefits of Serial EEPROM solutions as a result
of the advantages outlined above, are reduced system
costs, enhanced feature sets, and improved system
performance. System size and weight is reduced and
power sourcing requirements are kept at a minimum.
The following graph compares some of the main attributes
of a 16K Serial EEPROM device to a 16K
Parallel device.
USES AND APPLICATIONS OF
SERIAL EEPROMS
Uses of Serial EEPROMs
The days of Serial EEPROMs being simply a DIP switch
replacement are over. Here is a list of the functions
that Serial EEPROMs perform in a variety of computer,
industrial, telecommunication, automotive and consumer
applications:
1) Memory storage of channel selectors or analog
controls (volume, tone, etc.)
2) Power down storage and retrieval of events such as
fault detection or error diagnostics
3) Electronic real time event or maintenance log such
as page counting
4) Configuration storage
5) Last number redial and speed dial storage
6) User in-circuit look-up tables
Serial EEPROM Applications
Serial EEPROMs have found homes in hundreds of
embedded control applications in all major application
markets. The following list demonstrates the number
and variety of applications for serial EEPROMs.
Market: Applications

CONSUMER: TV tuners, VCRs, CD players, cameras, radios, and remote controls
COMPUTER/OA: Printers, copiers, PCs, palmtop and portable computers, disk drives and organizers
INDUSTRIAL: Bar code readers, point-of-sale terminals, smart cards, lock boxes, garage door openers, test measurement equipment and medical equipment
TELECOMM: Cellular, cordless and full-feature phones, faxes, modems, pagers, and satellite receivers
AUTOMOTIVE: Air bags, anti-lock brakes, odometers, radios and keyless entry
Using Serial EEPROMs for critical data and configuration
storage has only recently become a reality. The
current offerings of 2- and 3-wire serial devices offer the
systems designer interesting alternatives to the standard
parallel EEPROM devices. The Serial EEPROM is
basically a standard EEPROM array without the normal
parallel data and address I/O. These functions are
handled via serial I/O ports coupled with internal self-timed
state machines. Not only do serial devices save power,
board space, and cost, but they also reduce power consumption
in the embedded microcontroller, because fewer I/O lines are
needed to control the same functions. A typical embedded
application is shown in Figure A, depicting a controller
and several functions used in a personal communications
device, such as a mobile or portable phone. The
EEPROM stores speed dial and last number redial
numbers, credit card numbers, ID numbers, and configuration
parameters.
Figure B shows these same functions using a controller
with fewer I/O and a Serial EEPROM. There is no loss of
functionality but a significant savings in current, board
space, I/O pads, and cost. The serial solution uses
8 to 16 fewer I/O lines on the microcontroller, freeing up much
needed functionality and possibly allowing a much
smaller device package and downsized circuit boards.
SUMMARY
Serial EEPROMs are ideal, cost-effective solutions for
non-volatile memory in embedded control applications
that require: 1) a small-footprint, space-saving format;
2) the ability to easily program one byte at a
time; 3) low current consumption and low operating
voltage; 4) low microcontroller overhead and support;
and 5) the best price/performance non-volatile memory
solution available.
Their size, ease of programmability, low power consumption,
and low cost make Serial EEPROMs extremely
suitable for all the fast growing handheld and
portable battery powered computer, personal communications,
medical and industrial markets.

Kernel Locking Techniques

Why Do We Need Locking in the Kernel?
The fundamental issue surrounding locking is the need to provide synchronization in certain code paths in the kernel. These code paths, called critical sections, require some combination of concurrency or re-entrancy protection and proper ordering with respect to other events. The typical result without proper locking is called a race condition. Realize how even a simple i++ is dangerous if i is shared! Consider the case where one processor reads i, then another, then they both increment it, then they both write i back to memory. If i were originally 2, it should now be 4, but in fact it would be 3!
This is not to say that the only locking issues arise from SMP (symmetric multiprocessing). Interrupt handlers create locking issues, as does the new preemptible kernel, and any code can block (go to sleep). Of these, only SMP is considered true concurrency, i.e., only with SMP can two things actually occur at the exact same time. The other situations (interrupt handlers, kernel preemption and blocking) provide pseudo concurrency: code is not actually executed concurrently, but separate code paths can mangle one another's data.
These critical regions require locking. The Linux kernel provides a family of locking primitives that developers can use to write safe and efficient code.
SMP Locks in a Uniprocessor Kernel
Whether or not you have an SMP machine, people who use your code may. Further, code that does not handle locking issues properly is typically not accepted into the Linux kernel. Finally, with a preemptible kernel even UP (uniprocessor) systems require proper locking. Thus, do not forget: you must implement locking.
Thankfully, Linus made the excellent design decision of keeping SMP and UP kernels distinct. This allows certain locks not to exist at all in a UP kernel. Different combinations of CONFIG_SMP and CONFIG_PREEMPT compile in varying lock support. It does not matter, however, to the developer: lock everything appropriately and all situations will be covered.
Atomic Operators
We cover atomic operators initially for two reasons. First, they are the simplest of the approaches to kernel synchronization and thus the easiest to understand and use. Second, the complex locking primitives are built off them. In this sense, they are the building blocks of the kernel's locks. Atomic operators are operations, like add and subtract, which perform in one uninterruptible operation. Consider the previous example of i++. If we could read i, increment it and write it back to memory in one uninterruptible operation, the race condition discussed above would not be an issue. Atomic operators provide these uninterruptible operations. Two types exist: methods that operate on integers and methods that operate on bits. The integer operations work like this:
atomic_t v;
atomic_set(&v, 5);  /* v = 5 (atomically) */
atomic_add(3, &v);  /* v = v + 3 (atomically) */
atomic_dec(&v);     /* v = v - 1 (atomically) */
printk("This will print 7: %d\n", atomic_read(&v));
They are simple. There are, however, a few caveats to keep in mind when using atomics. First, you obviously cannot pass an atomic_t to anything but one of the atomic operators. Likewise, you cannot pass anything to an atomic operator except an atomic_t. Finally, because of the limitations of some architectures, do not expect atomic_t to have more than 24 usable bits. See the “Function Reference” Sidebar for a list of all atomic integer operations.
The next group of atomic methods is those that operate on individual bits. They are simpler than the integer methods because they work on the standard C data types. For example, consider void set_bit(int nr, void *addr). This function will atomically set to 1 the “nr-th” bit of the data pointed to by addr. The atomic bit operators are also listed in the “Function Reference” Sidebar.
Spinlocks
For anything more complicated than trivial examples like those above, a more complete locking solution is needed. The most common locking primitive in the kernel is the spinlock, defined in include/asm/spinlock.h and include/linux/spinlock.h. The spinlock is a very simple single-holder lock. If a process attempts to acquire a spinlock and it is unavailable, the process will keep trying (spinning) until it can acquire the lock. This simplicity creates a small and fast lock. The basic use of the spinlock is:
spinlock_t mr_lock = SPIN_LOCK_UNLOCKED;
unsigned long flags;
spin_lock_irqsave(&mr_lock, flags);
/* critical section ... */
spin_unlock_irqrestore(&mr_lock, flags);
The use of spin_lock_irqsave() will disable interrupts locally and provide the spinlock on SMP. This covers both interrupt and SMP concurrency issues. With a call to spin_unlock_irqrestore(), interrupts are restored to the state when the lock was acquired. With a UP kernel, the above code compiles to the same as:
unsigned long flags;
save_flags(flags);
cli();
/* critical section ... */
restore_flags(flags);
which will provide the needed interrupt concurrency protection without unneeded SMP protection. Another variant of the spinlock is spin_lock_irq(). This variant disables and re-enables interrupts unconditionally, in the same manner as cli() and sti(). For example:
spinlock_t mr_lock = SPIN_LOCK_UNLOCKED;
spin_lock_irq(&mr_lock);
/* critical section ... */
spin_unlock_irq(&mr_lock);
This code is only safe when you know that interrupts were not already disabled before the acquisition of the lock. As the kernel grows in size and kernel code paths become increasingly hard to predict, it is suggested you not use this version unless you really know what you are doing. All of the above spinlocks assume the data you are protecting is accessed in both interrupt handlers and normal kernel code. If you know your data is unique to user-context kernel code (e.g., a system call), you can use the basic spin_lock() and spin_unlock() methods that acquire and release the specified lock without any interaction with interrupts.
A final variation of the spinlock is spin_lock_bh() that implements the standard spinlock as well as disables softirqs. This is needed when you have code outside a softirq that is also used inside a softirq. The corresponding unlock function is naturally spin_unlock_bh().
Note that spinlocks in Linux are not recursive as they may be in other operating systems. Most consider this a sane design decision as recursive spinlocks encourage poor code. This does imply, however, that you must be careful not to re-acquire a spinlock you already hold, or you will deadlock.
Spinlocks should be used to lock data in situations where the lock is not held for a long time—recall that a waiting process will spin, doing nothing, waiting for the lock. (See the “Rules” Sidebar for guidelines on what is considered a long time.) Thankfully, spinlocks can be used anywhere. You cannot, however, do anything that will sleep while holding a spinlock. For example, never call any function that touches user memory, kmalloc() with the GFP_KERNEL flag, any semaphore functions or any of the schedule functions while holding a spinlock. You have been warned.
If you need a lock that is safe to hold for longer periods of time, safe to sleep with, or capable of allowing more than one holder at a time, Linux provides the semaphore.

Semaphores
Semaphores in Linux are sleeping locks. Because they cause a task to sleep on contention, instead of spin, they are used in situations where the lock-held time may be long. Conversely, since they have the overhead of putting a task to sleep and subsequently waking it up, they should not be used where the lock-held time is short. Since they sleep, however, they can be used to synchronize user contexts whereas spinlocks cannot. In other words, it is safe to block while holding a semaphore.
In Linux, semaphores are represented by a structure, struct semaphore, which is defined in include/asm/semaphore.h. The structure contains a pointer to a wait queue and a usage count. The wait queue is a list of processes blocking on the semaphore. The usage count is the number of concurrently allowed holders. If it is negative, the semaphore is unavailable and the absolute value of the usage count is the number of processes blocked on the wait queue. The usage count is initialized at runtime via sema_init(), typically to 1 (in which case the semaphore is called a mutex).
Semaphores are manipulated via two methods: down (historically P) and up (historically V). The former attempts to acquire the semaphore and blocks if it fails. The latter releases the semaphore, waking up any tasks blocked along the way.
Semaphore use is simple in Linux. To attempt to acquire a semaphore, call the down_interruptible() function. This function decrements the usage count of the semaphore. If the new value is less than zero, the calling process is added to the wait queue and blocked. If the new value is zero or greater, the process obtains the semaphore and the call returns 0. If a signal is received while blocking, the call returns -EINTR and the semaphore is not acquired.
The up() function, used to release a semaphore, increments the usage count. If the new value is greater than or equal to zero, one or more tasks on the wait queue will be woken up:
struct semaphore mr_sem;
sema_init(&mr_sem, 1);      /* usage count is 1 */
if (down_interruptible(&mr_sem)) {
        /* semaphore not acquired; received a signal ... */
        return -EINTR;
}
/* critical region (semaphore acquired) ... */
up(&mr_sem);
The Linux kernel also provides the down() function, which differs in that it puts the calling task into an uninterruptible sleep. A signal received by a process blocked in uninterruptible sleep is ignored. Typically, developers want to use down_interruptible(). Finally, Linux provides the down_trylock() function, which attempts to acquire the given semaphore. If the call fails, down_trylock() will return nonzero instead of blocking.
Reader/Writer Locks
In addition to the standard spinlock and semaphore implementations, the Linux kernel provides reader/writer variants that divide lock usage into two groups: reading and writing. Since it is typically safe for multiple threads to read data concurrently, so long as nothing modifies the data, reader/writer locks allow multiple concurrent readers but only a single writer (with no concurrent readers). If your data access naturally divides into clear reading and writing patterns, especially with a greater amount of reading than writing, the reader/writer locks are often preferred.
The reader/writer spinlock is called an rwlock and is used similarly to the standard spinlock, with the exception of separate reader/writer locking:
rwlock_t mr_rwlock = RW_LOCK_UNLOCKED;
read_lock(&mr_rwlock);
/* critical section (read only) ... */
read_unlock(&mr_rwlock);
write_lock(&mr_rwlock);
/* critical section (read and write) ... */
write_unlock(&mr_rwlock);
Likewise, the reader/writer semaphore is called an rw_semaphore and use is identical to the standard semaphore, plus the explicit reader/writer locking:
struct rw_semaphore mr_rwsem;
init_rwsem(&mr_rwsem);
down_read(&mr_rwsem);
/* critical region (read only) ... */
up_read(&mr_rwsem);
down_write(&mr_rwsem);
/* critical region (read and write) ... */
up_write(&mr_rwsem);
Use of reader/writer locks, where appropriate, is an appreciable optimization. Note, however, that unlike other implementations reader locks cannot be automatically upgraded to the writer variant. Therefore, attempting to acquire exclusive access while holding reader access will deadlock. Typically, if you know you will need to write eventually, obtain the writer variant of the lock from the beginning. Otherwise, you will need to release the reader lock and re-acquire the lock as a writer. If the distinction between code that writes and reads is muddled such as this, it may be indicative that reader/writer locks are not the best choice.

Big-Reader Locks
Big-reader locks (brlocks), defined in include/linux/brlock.h, are a specialized form of reader/writer locks. Big-reader locks, designed by Red Hat's Ingo Molnar, provide a spinning lock that is very fast to acquire for reading but incredibly slow to acquire for writing. Therefore, they are ideal in situations where there are many readers and few writers.
While the behavior of brlocks is different from that of rwlocks, their usage is identical with the lone exception that brlocks are predefined in brlock_indices (see brlock.h):
br_read_lock(BR_MR_LOCK);
/* critical region (read only) ... */
br_read_unlock(BR_MR_LOCK);
Use of brlocks is currently confined to a few special cases. Due to the large penalty for exclusive write access, it should probably stay that way.
The Big Kernel Lock
Linux contains a global kernel lock, kernel_flag, that was originally introduced in kernel 2.0 as the only SMP lock. During 2.2 and 2.4, much work went into removing the global lock from the kernel and replacing it with finer-grained localized locks. Today, the global lock's use is minimal. It still exists, however, and developers need to be aware of it.
The global kernel lock is called the big kernel lock or BKL. It is a spinning lock that is recursive; therefore two consecutive requests for it will not deadlock the process (as they would for a spinlock). Further, a process can sleep and even enter the scheduler while holding the BKL. When a process holding the BKL enters the scheduler, the lock is dropped so other processes can obtain it. These attributes of the BKL helped ease the introduction of SMP during the 2.0 kernel series. Today, however, they should provide plenty of reason not to use the lock.
Use of the big kernel lock is simple. Call lock_kernel() to acquire the lock and unlock_kernel() to release it. The routine kernel_locked() will return nonzero if the lock is held, zero if not. For example:
lock_kernel();
/* critical region ... */
unlock_kernel();
Preemption Control
Starting with the 2.5 development kernel (and 2.4 with an available patch), the Linux kernel is fully preemptible. This feature allows processes to be preempted by higher-priority processes, even if the current process is running in the kernel. A preemptible kernel creates many of the synchronization issues of SMP. Thankfully, kernel preemption is synchronized by SMP locks, so most issues are solved automatically by writing SMP-safe code. A few new locking issues, however, are introduced. For example, per-CPU data may not require a lock on SMP because it is implicitly safe (it is unique to each processor), but with kernel preemption a task can be preempted in the middle of accessing it and another task on the same processor can then touch the data, so protection is needed.
For these situations, preempt_disable() and the corresponding preempt_enable() have been introduced. These methods are nestable such that for each n preempt_disable() calls, preemption will not be re-enabled until the nth preempt_enable() call. See the “Function Reference” Sidebar for a complete list of preemption-related controls.
Conclusion
Both SMP reliability and scalability in the Linux kernel are improving rapidly. Since SMP was introduced in the 2.0 kernel, each successive kernel revision has improved on the previous by implementing new locking primitives and providing smarter locking semantics by revising locking rules and eliminating global locks in areas of high contention. This trend continues in the 2.5 kernel. The future will certainly hold better performance.
Kernel developers should do their part by writing code that implements smart, sane, proper locking with an eye to both scalability and reliability.

SSH session automatic login and crontab usage for scheduling task based on particular time

#!/usr/bin/expect -f
# Expect script to supply root/admin password for remote ssh server
# and execute command.
# This script needs three arguments to connect to the remote server:
# password = Password of remote UNIX server, for root user.
# ipaddr = IP Address of remote UNIX server, no hostname
# scriptname = Path to remote script which will execute on remote server
# For example:
# ./sshlogin.exp password 192.168.1.11 who
# ------------------------------------------------------------------------
# Copyright (c) 2004 nixCraft project <http://cyberciti.biz/fb/>
# This script is licensed under GNU GPL version 2.0 or above
# -------------------------------------------------------------------------
# This script is part of nixCraft shell script collection (NSSC)
# Visit http://bash.cyberciti.biz/ for more information.
# ----------------------------------------------------------------------
# set variables
set password [lrange $argv 0 0]
set ipaddr [lrange $argv 1 1]
set scriptname [lrange $argv 2 2]
set arg1 [lrange $argv 3 3]
set timeout -1
# now connect to remote UNIX box (ipaddr) with given script to execute
spawn ssh root@$ipaddr $scriptname $arg1
match_max 100000
# Look for password prompt
expect "*?assword:*"
# Send password aka $password
send -- "$password\r"
# send blank line (\r) to make sure we get back to gui
send -- "\r"
expect eof
    With courtesy:
    http://bash.cyberciti.biz/security/expect-ssh-login-script/
To run a script once at boot on AIX, an inittab entry can be added with mkitab:
mkitab "lok:2:once:/home/lokesh/lok.sh > /dev/console"
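For time-based scheduling, as mentioned in the title, the expect script can also be run from cron. An entry like the following (paths, password and schedule are illustrative) can be added with crontab -e:

```shell
# minute hour day-of-month month day-of-week  command
# Run sshlogin.exp every day at 02:30, logging output
30 2 * * * /home/lokesh/sshlogin.exp mypassword 192.168.1.11 who >> /var/log/sshlogin.log 2>&1
```

Note that placing a password on a crontab command line exposes it to anyone who can read the crontab; key-based ssh authentication is the safer alternative where available.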